
Sunday, March 25, 2012

Best practice regarding tables?

Hi!

I have 6-7 tables in total containing subobjects for different objects, such as phone numbers and emails for contacts.
This means I have to run several queries on each detail page. I will use stored procedures for fetching the subobjects.

My question is: if I merged subobjects into the same tables, letting me use perhaps 2 queries instead of 4 while perhaps doubling the size
of the tables, would there be any possibility of a performance difference whatsoever?

As I see it, the pros are fewer queries, and the cons are larger tables plus the need for another field separating the types of objects in the table.

Anyone have insight to this?

I would be curious to see what you mean by "subobjects" - usually when you ask a question like this it is a good idea to post your table structure. You'll get more specific help that way.

In general, you should strive for proper normalization of your database. This normally means more tables, and the tables are thin, not wide. However, if you have similar sets of data that can be "typed" and placed into the same table, that is normally a good route to take. For example, if you have business phone numbers and home phone numbers, I would recommend putting these in the same table and adding a field to specify the type, rather than having two separate tables. On the other hand, you shouldn't try to squeeze more disparate types of data into the same table and type them.

|||

By objects I mean things like applications, users, and groups.
By subobjects I mean phone numbers, emails, contacts (for example, when an application has a number of contacts linked to it), etc.

To illustrate, I'll post some of my tables:

First an object table:

CREATE TABLE [sw20aut].[sw_apps] (
[id] [int] IDENTITY (1, 1) NOT NULL ,
[kund_id] [int] NOT NULL ,
[namn] [varchar] (80) COLLATE Finnish_Swedish_CI_AS NOT NULL ,
[beskrivning] [text] COLLATE Finnish_Swedish_CI_AS NOT NULL DEFAULT ('') ,
[usr_rubrik] [varchar] (60) COLLATE Finnish_Swedish_CI_AS NOT NULL DEFAULT ('Övrig information') ,
[usr_information] [text] COLLATE Finnish_Swedish_CI_AS NOT NULL DEFAULT ('') ,
[kat] [int] NOT NULL DEFAULT('0') ,
[typ] [tinyint] NOT NULL DEFAULT('0') ,
[skapare] [varchar] (15) COLLATE Finnish_Swedish_CI_AS NOT NULL ,
[skapad] [datetime] NOT NULL ,
[del] [bit] NOT NULL DEFAULT ('0')
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

Here are some subtables:

CREATE TABLE [sw20aut].[sw_sub_email] (
[id] [int] IDENTITY (1, 1) NOT NULL ,
[p_typ] [varchar] (3) COLLATE Finnish_Swedish_CI_AS NOT NULL ,
[p_id] [int] NOT NULL ,
[epost] [varchar] (60) COLLATE Finnish_Swedish_CI_AS NULL ,
[kommentar] [varchar] (60) COLLATE Finnish_Swedish_CI_AS NOT NULL
) ON [PRIMARY]

CREATE TABLE [sw20aut].[sw_sub_phone] (
[id] [int] IDENTITY (1, 1) NOT NULL ,
[p_typ] [varchar] (3) COLLATE Finnish_Swedish_CI_AS NOT NULL ,
[p_id] [int] NOT NULL ,
[tel_land] [varchar] (6) COLLATE Finnish_Swedish_CI_AS NULL ,
[tel_rikt] [varchar] (6) COLLATE Finnish_Swedish_CI_AS NULL ,
[tel_nr] [varchar] (30) COLLATE Finnish_Swedish_CI_AS NULL ,
[kommentar] [varchar] (60) COLLATE Finnish_Swedish_CI_AS NOT NULL
) ON [PRIMARY]

CREATE TABLE [sw20aut].[sw_sub_contacts] (
[id] [int] IDENTITY (1, 1) NOT NULL ,
[p_typ] [varchar] (3) COLLATE Finnish_Swedish_CI_AS NOT NULL ,
[p_id] [int] NOT NULL ,
[kon_id] [int] NOT NULL ,
[kon_kid] [int] NOT NULL ,
[kommentar] [varchar] (60) COLLATE Finnish_Swedish_CI_AS NOT NULL
) ON [PRIMARY]

CREATE TABLE [sw20aut].[sw_sub_grupper] (
[id] [int] IDENTITY (1, 1) NOT NULL ,
[p_typ] [varchar] (3) COLLATE Finnish_Swedish_CI_AS NOT NULL ,
[p_id] [int] NOT NULL ,
[grp_id] [int] NOT NULL ,
[grp_kid] [int] NOT NULL ,
[kommentar] [varchar] (60) COLLATE Finnish_Swedish_CI_AS NOT NULL
) ON [PRIMARY]

p_id = the parent's id
p_typ = type of parent, for example app, con or grp

As seen above, I could merge sub_contacts and sub_groups easily (though I would need another field to mark whether a row refers to a contact or a group), and I could probably merge email and phone as well, in the worst case using some kind of comma-separated format to separate country code, area code, etc. I have a couple of other similar sub tables as well. The question is whether it would be worth the effort, or whether the larger tables (more rows) and the extra field to filter on would negate the advantage of fewer queries when listing. Practically it would mean a step from 6-7 queries on the worst pages down to 3-4.
I use batched dynamic queries and some stored procedures, depending on circumstances.
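
To make the idea concrete, a merged link table along the lines discussed above might look roughly like this (only a sketch: the sw_sub_links name and the sub_typ discriminator column are assumptions, not part of the existing schema):

CREATE TABLE [sw20aut].[sw_sub_links] (
[id] [int] IDENTITY (1, 1) NOT NULL ,
[p_typ] [varchar] (3) COLLATE Finnish_Swedish_CI_AS NOT NULL , -- type of parent: app, con or grp
[p_id] [int] NOT NULL , -- parent id
[sub_typ] [varchar] (3) NOT NULL , -- assumed discriminator: 'con' = contact link, 'grp' = group link
[sub_id] [int] NOT NULL , -- id of the linked contact or group (replaces kon_id / grp_id)
[sub_kid] [int] NOT NULL , -- replaces kon_kid / grp_kid
[kommentar] [varchar] (60) COLLATE Finnish_Swedish_CI_AS NOT NULL
) ON [PRIMARY]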

|||

The baseline for the table definition is files and associations, so if what I am seeing is correct you have maybe one or two secondary tables, and the relationship is actually quantified by upper- and lower-bound cardinality, meaning the number of expected rows. And I don't think name should be varchar(80); more like 30-50. Try the links below for functional dependency and free tables you can clone. Hope this helps.

http://www.databaseanswers.org/data_models/

http://www.utexas.edu/its/windows/database/datamodeling/rm/rm7.html

|||

The problem with sizing the name columns in different tables is partly that I will have to import data from old tables used by the former application.

These tables are very poorly designed (even by my standards ;) ) and have led users to put more than just the name in the name fields, and so forth. I will control this better programmatically, but
nevertheless stripping existing text from the field would not be popular, so my thought was to implement it as-is and try to steer all new information in a better way.
When examining the old data, I found I needed that size to prevent truncation.

Anyway, the tables above are examples; I have more secondary tables and quite a few primary tables as well.
The basic question I was considering is whether it is better practice to have larger tables which result in fewer queries per page (merging the sub/secondary tables as much as practically possible), or to use smaller, leaner tables and more in-page queries/procs. For example, using 1 table to link both contacts and groups, and 1 table for perhaps phone/email and even links, as long as one can reuse the fields without ending up with empty fields.

Shall check your links btw

|||

You could clean the data before importing it. I am concerned that you are describing tables at the calculus end, where you have a main table and the dimensions; at this end, table sizes should be similar. I know you can use validators, but you can also use a CHECK CONSTRAINT on your columns so people cannot insert whatever they like in the name columns.

http://msdn2.microsoft.com/en-us/library/ms188258.aspx
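
For illustration, a minimal sketch of such a constraint applied to the [namn] column of the sw_apps table shown earlier (the rule itself - rejecting digits - is just an assumed example of what "valid" might mean here):

ALTER TABLE [sw20aut].[sw_apps]
ADD CONSTRAINT CK_sw_apps_namn_no_digits
CHECK ([namn] NOT LIKE '%[0-9]%') -- reject names containing digits (example rule only)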

Thursday, March 22, 2012

Best practices: changing values

What is the best way to design your tables in cases where field values change?

Example:
CREATE TABLE Product (ProductID INT, Description VARCHAR(32), Price SMALLMONEY...);

CREATE TABLE Purchase (PurchaseID INT, ProductID INT, Quantity INT);

Since prices obviously change over time, I was wondering what is the best table schema to use to reflect these changes, while still remembering previous price values (e.g. for generating reports on previous sales...)

Is it better to include a "Price SMALLMONEY" field in the Purchase table (which kind of de-normalizes it), or is it better to have a separate ProductPrice table that keeps track of changing prices like so:

CREATE TABLE ProductPrice (ProductID INT, Price SMALLMONEY, CreationDate DATETIME...);

and have the Purchase table reference the ProductPrice table instead of the products table?

I have used both methods in the past, but I wanted to get other people's take on it.

Thanks

Because price can change for many reasons, I always keep it in the actual transaction row. For example, you might have different prices for a given product based on quantity purchased (for example, buying 100 units gets a price break). There might be reasons for different prices based on the customer (one price for wholesale, one for sub-contractors, another price for retail). These differences could be either discrete or cumulative. In short, the price in the inventory table might only be a starting point; the price in the transaction table is the authoritative price for a transaction.

-PatP|||If you want to be able to track historical prices, such as how much a price has changed over time, then you need to add a time dimension to your price table.
But for a financial application such as this there is no substitute for storing the actual price paid in the transaction table.|||(which kind of de-normalizes it)

No, it doesn't. :) It's an attribute of the purchase.

The purchase table should have the price paid at time of purchase.

There should be a ProductPrice table that holds the price history for each product. If you want to avoid duplicating data, you can put the ProductPriceID in the Purchase table so you have the exact price at the time the purchase was made.
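
A minimal sketch of that arrangement, with column names that are only illustrative assumptions:

CREATE TABLE ProductPrice (
    ProductPriceID INT IDENTITY(1,1) PRIMARY KEY,
    ProductID      INT NOT NULL,
    Price          SMALLMONEY NOT NULL,
    CreationDate   DATETIME NOT NULL DEFAULT (GETDATE())  -- start of the period this price applies to
);

CREATE TABLE Purchase (
    PurchaseID     INT IDENTITY(1,1) PRIMARY KEY,
    ProductID      INT NOT NULL,
    Quantity       INT NOT NULL,
    PricePaid      SMALLMONEY NOT NULL,   -- the authoritative price for this transaction
    ProductPriceID INT NULL               -- optional reference to the price row in effect at purchase time
);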

Best Practices, Create a Test DB Subset

Hello, we have a production DB that is 125 GB. Lots of tables. Is there an
automated way to extract say 10% of all those records? Thanks.

Hi,
You have to write your own query to extract 10% data from each table.
Thanks
Hari
SQL Server MVP
"Pancho" <Pancho@.discussions.microsoft.com> wrote in message
news:7F1A5FC8-6431-49E0-812B-14ECEC8998CB@.microsoft.com...
> Hello, we have a production DB that is 125 GB. Lots of tables. Is there
> an
> automated way to extract say 10% of all those records? Thanks.|||That's going to be very time consuming as there are 100+ user tables. I was
hoping to perhaps create something more automated.
"Hari Pra" wrote:

> Hi,
> You have to write your own query to extract 10% data from each table.
> Thanks
> Hari
> SQL Server MVP
> "Pancho" <Pancho@.discussions.microsoft.com> wrote in message
> news:7F1A5FC8-6431-49E0-812B-14ECEC8998CB@.microsoft.com...
>
>|||Try something like this..
-- Collect the names of all user tables, then loop over them
select name, id
into #TempTables
from sysobjects
where type = 'U'

declare @CurrentID int
declare @sqltext nvarchar(2000)

select @CurrentID = min(id) from #TempTables
while (@CurrentID is not null)
begin
    -- Currently just selecting, but you could dynamically add new tables into the DB from here.
    set @sqltext = N'select top 10 percent * from ' + (select name from #TempTables where id = @CurrentID)
    exec sp_executesql @sqltext
    select @CurrentID = min(id) from #TempTables where id > @CurrentID
end
"Pancho" wrote:

> Hello, we have a production DB that is 125 GB. Lots of tables. Is there
an
> automated way to extract say 10% of all those records? Thanks.|||Pancho
See if this helps
EXEC sp_msforeachtable 'select top 10 percent * from ?'
"Pancho" <Pancho@.discussions.microsoft.com> wrote in message
news:D790FDC0-4F2D-452F-8E47-B73FAC4A9D95@.microsoft.com...
> That's going to be very time consuming as there are 100+ user tables. I
> was
> hoping to perhaps create something more automated.
> "Hari Pra" wrote:
>
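
If the goal is to land the sampled rows in a separate test database rather than just select them, a variation along these lines might work (TestDB is a placeholder name; the target tables must not already exist there, and TOP 10 PERCENT without an ORDER BY returns an arbitrary sample):

-- '?' expands to the qualified table name, e.g. [dbo].[Orders]
EXEC sp_msforeachtable 'SELECT TOP 10 PERCENT * INTO TestDB.? FROM ?'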

Best practices to transfer data/tables from TEST to LIVE

HI all,
I'm definitely a newbie and have what is probably a simple question
but I want to make sure I'm following good practices.
What is the best way to update a LIVE database with data being created
on a TEST/DEV database?
I want to send a few new records I created on a test environment onto
a live environment to a few different tables. It is only a couple of
records and I could probably do it manually in like 10 minutes. But I
assume eventually I'll have larger quantities of data to move and
tables to update, and I don't want to forget to include something.
What is the best way to handle this kind of stuff? Scripts? Any
other suggestions or different ideas?
Kind Regards,
Ray
You can achieve your goal by scripting or by using SSIS; scripting could seem the
simplest way for the job you described, but if you think that the same job
could be reused or extended later, you can build a package using
Business Intelligence Development Studio (BIDS). For more (really more!)
details, look at the Books Online "SQL Server Integration Services (SSIS)"
chapter.
Gilberto Zampatti
"bcap" wrote:

> HI all,
> I'm definately a newbie and have what is probably a simple question
> but I want to make sure I'm following good practices.
> What is the best way to update a LIVE database with data being created
> on a TEST/DEV database?
> I want to send a few new records I created on a test environment onto
> a live environment to a few different tables. It is only a couple
> records and could probably manually do it in like 10 minutes. But I
> assume eventually I'll have larger qualities of data to move and
> tables to update and I don't want to forget to include soemthing.
> What is the best way to handle this kind of stuff? Scripts? Any
> other suggestions or different ideas?
> Kind Regards,
> Ray
>
|||On Jun 9, 3:24 am, Gilberto Zampatti
<GilbertoZampa...@.discussions.microsoft.com> wrote:
> You can achieve your goal scripting or using SSIS; scripting could seem the
> simplest way for the job you described, but if you think that the same job
> could be reused or subsequently implemented, you can build a package using
> Business Intelligence Development Studio (BIDS). For more (really more!)
> details, look at the Books on Line "SQL Server Integration Services (SSIS)"
> chapter.
> Gilberto Zampatti
>
> "bcap" wrote:
>
>
>
> - Show quoted text -
Thank you very much!
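
As a hedged illustration of the scripting approach, a deployment script for a handful of new records might look roughly like this (the Category table and its columns are invented for the example; the point is that the script can be re-run against LIVE without creating duplicates):

-- Insert the new row only if it is not already present in the LIVE database
IF NOT EXISTS (SELECT 1 FROM dbo.Category WHERE CategoryCode = 'NEWCAT')
BEGIN
    INSERT INTO dbo.Category (CategoryCode, CategoryName)
    VALUES ('NEWCAT', 'New category created in TEST');
END
GO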


Best Practices Question?

Have a question about which method is the more accepted method.
I have two tables: table1 and table2.
They are joined by a primary and foreign key field:
table1
table1ID (Primary Key)
Room
NotAvailable (bit field: 1 will be not available and 0 will be available)
table2
table2ID (Primary Key)
table1ID (Foreign Key to table1)
DateUsed
There will only be records in table2 for rooms that are not available, so
when the two tables are inner joined, no matter how many records there are in
table1, the only records that will show up are those matched in table2.
Question is: Is it good practice to use a field such as NotAvailable, to know
that a room is not available, or use the results of the join to set a
NotAvailable property field in my code?
Thank you for any responses.
Note: I cannot use our company's actual field names, so disregard what the
names are and other ways to show a room as not available. Just want to show
the structure for the question I am asking.

The problem with the NotAvailable field is that it requires modification
whenever there is an INSERT or UPDATE in another table. And then the
question arises: "What exactly does NotAvailable mean, since there is no time
period included?" Is it NotAvailable today, tomorrow, next week, etc.? That
provides opportunities for de-synching of the data - unless there is a
trigger on Table2.
Using a query joining the two tables (perhaps including a calendar table for
future dates) 'should' always provide an accurate presentation of data. (I'm
thinking of tables for Rooms and Reservations -therefore the Calendar table
is needed.)
It's not a 'Best Practice' to store data that is the result of some
manipulation of other data. But with careful planning, sometimes it has to
be done.
Arnie Rowland, Ph.D.
Westwood Consulting, Inc
Most good judgment comes from experience.
Most experience comes from bad judgment.
- Anonymous
"Mike Collins" <MikeCollins@.discussions.microsoft.com> wrote in message
news:AA6DB4DC-1E7E-4007-AEDA-C8355D385A28@.microsoft.com...
> Have a question about which method is the more accepted method.
> I have two tables: table1 and table2.
> They are joined by a primary and foreign key field:
> table1
> table1ID (Primary Key)
> Room
> NotAvailable (bit field: 1 will be not available and 0 will be available)
> table2
> table2ID (Primary Key)
> table1ID (Foreign Key to table1)
> DateUsed
> There will only be records in table2 for rooms that are not available, so
> when the two tables are inner joined, no matter how many records there are
> in
> table1, the only records that will show up is what is matched in table2.
> Question is: Is it good practice to use a field such as NotAvailable, to
> know
> that a room is not available, or use the results of the join to set a
> NotAvailable property field in my code?
> Thank you for any responses.
> Note: I cannot use our company's actual field names, so disregard what the
> names are and other ways to show a room as not available. Just want to
> show
> the structure for the question I am asking.|||Mike Collins wrote:
> Have a question about which method is the more accepted method.
> I have two tables: table1 and table2.
> They are joined by a primary and foreign key field:
> table1
> table1ID (Primary Key)
> Room
> NotAvailable (bit field: 1 will be not available and 0 will be available)
> table2
> table2ID (Primary Key)
> table1ID (Foreign Key to table1)
> DateUsed
> There will only be records in table2 for rooms that are not available, so
> when the two tables are inner joined, no matter how many records there are in
> table1, the only records that will show up is what is matched in table2.
> Question is: Is it good practice to use a field such as NotAvailable, to know
> that a room is not available, or use the results of the join to set a
> NotAvailable property field in my code?
> Thank you for any responses.
> Note: I cannot use our company's actual field names, so disregard what the
> names are and other ways to show a room as not available. Just want to show
> the structure for the question I am asking.
You should rely on the data in table2 to determine if a room is
available. Using the bit field, you're exposing yourself to potentially
out-of-sync data, and you're duplicating the room status.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Thank you to both of you for your replies. That is the way I was thinking it
should be; I just needed a better way to explain it and get my point across.
"Mike Collins" wrote:

> Have a question about which method is the more accepted method.
> I have two tables: table1 and table2.
> They are joined by a primary and foreign key field:
> table1
> table1ID (Primary Key)
> Room
> NotAvailable (bit field: 1 will be not available and 0 will be available)
> table2
> table2ID (Primary Key)
> table1ID (Foreign Key to table1)
> DateUsed
> There will only be records in table2 for rooms that are not available, so
> when the two tables are inner joined, no matter how many records there are in
> table1, the only records that will show up is what is matched in table2.
> Question is: Is it good practice to use a field such as NotAvailable, to know
> that a room is not available, or use the results of the join to set a
> NotAvailable property field in my code?
> Thank you for any responses.
> Note: I cannot use our company's actual field names, so disregard what the
> names are and other ways to show a room as not available. Just want to show
> the structure for the question I am asking.
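
As a hedged sketch of the join-based approach the replies recommend (assuming a table2 row means the room is taken on DateUsed), availability can be derived at query time instead of being stored:

DECLARE @TargetDate DATETIME;
SET @TargetDate = '20120401';    -- example date

-- Rooms available on the target date: those with no matching row in table2
SELECT t1.table1ID, t1.Room
FROM table1 AS t1
WHERE NOT EXISTS (
    SELECT 1
    FROM table2 AS t2
    WHERE t2.table1ID = t1.table1ID
      AND t2.DateUsed = @TargetDate
);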


Best Practices Question: Create many common databases or many tables in one database

General Best Practices Question: Create many common data databases or many tables in one database? I am not a DBA!

In developing a web portal, ERP, or any large enterprise-wide application made up of smaller task-specific applications under the parent environment.

Should the design be to create a common data database and a specific database for each app, or just create one database and use best-practice naming conventions? RI may or may not be a concern.

Areas of concern are: maintenance, security/access, performance, backup/restore, and file system and DB fragmentation benefits/hits.

It depends on a whole host of questions. Will you need to do reporting across these applications? How will security need to be handled? What is the estimated size and growth of these separate applications?|||Originally posted by rhigdon
It depends on a whole host of questions. Will you need to do reporting across these applications? How will security need to be handled? What is the estimated size and growth of these seperate applicaitons?

I know my question is a broad one and it does bring up a whole host of questions. I can see many pros and cons for both approaches.

Reporting would be needed across all dbs or tables.
Security would be ADSI/NT Auth Mode.
Much of the apps data would grow very large (heavy I/O) and independently, while others would grow at a slower rate.|||What do you mean by "many common databases"? Do you mean duplicate schemas? I'd advise against that, unless you want rolling out upgrades, synchronizing lookup tables, and consolidating data for reporting to be a full-time job.

"Put all your eggs in one basket - and then WATCH THAT BASKET!"
-Mark Twain

blindman|||Originally posted by blindman
What do you mean by "many common databases"? Do you mean duplicate schemas? I'd advise against that, unless you want rolling out upgrades, synchronizing lookup tables, and consolidating data for reporting to be a full-time job.

"Put all your eggs in one basket - and then WATCH THAT BASKET!"
-Mark Twain

blindman

Databases that contain the data that is common to a specific application only, and then each of those databases containing links to a database that contains data that is shared or common to all other apps/databases.|||If they are truly separate applications, then they should be separate databases.

Roll their data into a central data warehouse for analytical processing.

Maintain common data in central database and publish it to the clients to keep them synchronized. Each application should be stand-alone, and not rely on links to other databases if it can be helped.

This is going to be an ambitious project.

blindman

Best Practices Question for Outputting

Hey guys,
Little bit of a newbie question here... I have a database with about 20 or so tables in a relational model. I am now working on an output scheme and had a quick question regarding best practices for outputting. Would it be best to
1) Set up a view that basically joins all of these tables together, then bind a DataSet/DataTable to it and output as needed?
2) Setup individual views for each table and run through them?
Thanks for the help!
e...
I've never liked creating do-everything views. You'll never get the same performance as you would by just creating individual stored procedures which join the tables you need to get the specific fields and records you need to fulfill each type of query or scenario you have. Unless you have a pretty simple site that doesn't do more than a couple very similar things, it's a lot of overhead that's not needed. Your other question: Why set up a view on one table? Unless you're doing a lot of calculated fields in the view that are derived from underlying fields in the table, that's a waste. Not knowing anything really about your situation, my stock advice is to create a stored procedure for every type of query you'll need. Add parameters as needed, but each proc should fulfill a specific need. Don't try to make a proc too general. They tend to get bigger and more confusing over time when they try to do too many different things.|||One very good thing about views is it reduces redundancies in yourprocedures. I'd personally make a few views of the most commontypes of joins you'd make. I've seen this as a problem with manydatabase driven sites and applications where one table change requiresyou to alter 30 stored procedures, and code for multiple pages. Alot of minor changes can remain minor if you consolidate alittle. It's extremely funny though when you have issues where afield name is spelled ammount. :)

Tuesday, March 20, 2012

Best Practices - Naming conventions

I work with a group of developers who insist their application's success depends on being able to create databases and tables on the fly. Their proposed naming convention is: for databases, the job number; and for tables, job number "_" order number. I tried to explain that this plan eludes any "normal" notion of database sanctity or sanity, and that giving out DDL admin rights to a .Net component is a bad idea, all to no avail. Anyone out there with an opinion either way, or am I taking my title (DBA) too seriously?

To create databases and tables on the fly !!!!

Good idea! But who will be in charge of supporting this?
DBA or developer? I guess everyone knows the answer. On the fly it is possible to create 'temporary' permanent objects - and remove them using some rules. Use a script for creating the DB from the application.|||It sounds to me like your developers don't understand the concept of scalability, or they wouldn't need to be creating databases on the fly for different job numbers, or different tables for different orders. It sounds absurd, and rather amateur, to me (you asked for opinions, right?).

Another 1 or 2 dimensions in their tables would handle it.

blindman|||Sounds like you just caught the SharePoint Team Services bug. Microsoft brags about a server with over 1,000 databases on it. Just tell the developers (and management) that you can not really be responsible for the performance on this machine, since you can not predict disk usage, or reallocate any files anywhere about, because they just keep coming back. If you have one disk fill up, then all you will get is errors from your application. God help you, if you catch a runaway transaction that blows your transaction logs beyond the bounds of normal space, too.

Just remember, just because Microsoft tells you you can do something, doesn't mean you should do it.|||Sound like MCrowley already "been there, done that" like I am in deep
caca with Sharepoint right now. They created DB "on the fly" like rabbit running high on Viagra... The DATA & LOG files default setup in the SQL Server's Properties won't apply to these Sharepoint DB creation (only effective when using GUI to create DB) . The only concept of scalability they can understand is getting higher & higher... :-)|||Sounds to me that your developers just re-invented the concept of partioned tables; however, not really clever. Temporary databases should not the way to do it! Could you get some more deeper into details of your developer's intentions? I'm sure we all here can propose much more cleverer ways to achieve the same.

Monday, March 19, 2012

Best Practice to update SQL Server Database Tables & Procedures

We are not going to have remote access to the SQL Server 2005… we as
developers can build and test at our place.
After successfully testing the code… the web pages and the new SQL Server database
tables and stored procedures have to be carried physically on a USB flash
drive, and we are required to go to the host company's server location and update the
web pages and SQL Server 2005 database tables.
What is the best practice … if the situation is that we have to carry the
table and its data physically to the host location, log in to
the server, connect the USB flash drive, and update the tables in SQL Server?
What is the best practice for performing updates by physically going to the host
company to make SQL Server 2005 database changes?
> What is the best practice … if the situation is that we have to carry the
> table and its data inside table physically to the host location and login
> to
> the server and connect USB flash drive and update tables in SQL Server..
> What is the best practice to perform update by going physical to the host
> company for make SQL Server 2005 Database changes?
In both cases, the normal approach is to perform new installations and
upgrades using SQL scripts. You can use a tool like SQLCMD to execute the
scripts from a command file. For upgrades, it is important to test against
a production database replica to ensure the database is properly upgraded.
Hope this helps.
Dan Guzman
SQL Server MVP
"TalalSaleem" <TalalSaleem@.discussions.microsoft.com> wrote in message
news:373C5072-E4D4-4F7C-A827-266DAA35C4E4@.microsoft.com...
> We will going to have no remote access to the SQL Server 2005… we as
> developers can build and test in our place.
> After successfully testing the codes… web pages and SQL Server Database
> new
> tables and stored procedures has to be carried physically in the USB Flash
> disk and required to go to the host company server location and update the
> web pages and SQL Server 2005 Database tables.
> What is the best practice … if the situation is that we have to carry the
> table and its data inside table physically to the host location and login
> to
> the server and connect USB flash drive and update tables in SQL Server..
> What is the best practice to perform update by going physical to the host
> company for make SQL Server 2005 Database changes?
|||On Tue, 15 Jan 2008 07:04:25 -0600, "Dan Guzman"
<guzmanda@.nospam-online.sbcglobal.net> wrote:

>In both cases, the normal approach is to perform new installations and
>upgrades using SQL scripts. You can use a tool like SQLCMD to execute the
>scripts from a command file. For upgrades, it is important to test against
>a production database replica to ensure the database is properly upgraded.
I think he's asking more about data. Say you need to send someone 1gb
of data, to populate a table, to update a database, etc.
I'd say you can use a good old ASCII CSV or flat file, but of course
you need some kind of import logic, typically some staging tables and
an SSIS package, to do the work.
Josh

Sunday, March 11, 2012

Best practice for indexed views and aggregates tables

Hi, all experts here,

Thank you very much for your kind attention.

I am having some questions on indexed views and aggregate tables.

My question is: To improve the performance of the queries, is it better to use indexed views or aggregate tables for those aggregates which are often queried?

I am looking forward to hearing from you.

Thank you very much in advance for your help.

With best regards,

Yours sincerely,

Well...it depends. Indexed views are great because they keep the data from a view materialized...but the database engine must maintain the indexes, which may slow OLTP operations. An aggregate table may be a good idea, but it might also be complicated to design and keep properly updated, and you have to make sure it actually gets updated.
Tim|||

Hi, Tim,

Thank you very much for your kind advices.

In my case, the aggregates reside in a historical data warehouse, where updates to the data are not very likely, and it will not have much to do with any operational database. Given what you advised, would indexed views be a better idea in this case?

Looking forward to hearing from you.

With best regards,

Yours sincerely,

|||If the data isn't getting updated often, then I personally would design an aggregate table. That will give you a lot of flexibility for your project...flexibility that may be prohibited by the restrictions on designing indexed views (there are a lot of things you need to adhere to so that your view can be indexed). Hope this helps.|||

Hi, Tim,

Thanks a lot for your advices. They've been very helpful.

With best regards,

Yours sincerely,
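
For reference, a minimal sketch of what an indexed view over a hypothetical fact table could look like (dbo.FactSales and its columns are assumptions; indexed views also require SCHEMABINDING, COUNT_BIG(*) with aggregates, two-part names, and specific SET options):

CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(SalesAmount) AS TotalSales,   -- assumes SalesAmount is NOT NULL
       COUNT_BIG(*)     AS RowCnt        -- required when the view contains aggregates
FROM dbo.FactSales
GROUP BY ProductID;
GO

-- The unique clustered index is what materializes the view
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct ON dbo.vSalesByProduct (ProductID);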

Best practice for handling XML schema hierarchies?

I have a number of tables with columns of xml datatype. Each of these columns is typed against a different XML schema collection. However, each of the XML schema collections contains a hierarchy of schema definitions - and the schemas towards the top of the hierarchy are used by a number of different XML schema collections. I want to define the schemas in such a way that if I need to change a schema towards the top of the hierarchy, I only need to change it in one place.

I understand that it is not possible to reference a schema in one collection from another - is my understanding correct? (If I am wrong, then please disregard the following)

If the schemas must be duplicated in each xml schema collection that needs them, then I am considering the following approach. Are there any better methods available?

- Create a 'reference' xml schema collection that contains all the schemas
- Create a table that relates a schema to all the collections that need it
- Write a stored proc that updates all the individual collections appropriately when the reference collection is updated

Using this method, I would anticipate updating the 'reference' schema and running the stored procedure at a quiet time

I could just use a single collection for everything - but although it would ensure that the contents of a column satisfied a schema - it wouldn't check that it satisfied the correct schema. I guess I could put separate validation on the column to ensure that the right contents had been added, but this seems to run counter to the whole idea of using the XML collections

Any thoughts?

You are correct that you cannot refer to schemas from other schema collections.

Your approach of using a master schema collection sounds OK. Alternatively, you could use a special table that contains each one of the schemas in an XML datatype column. That way, you could even perform updates on the schemas programmatically, and you would preserve annotations and comments in the schema as an added bonus.

You then could still do the stored procs.

Note, however, that you have to be careful with evolving your schemas, in that they should only be gaining new elements and types. Otherwise the schema collection will disallow such updates, since the cost of revalidation and potential validation failure of old data was considered too high to be done implicitly...

Best regards

Michael

|||Thanks Michael - will give it a go
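
A rough sketch of the "reference collection plus tracking table" idea could look like this (all names and the toy schema are placeholders; the stored procedure would then loop over the tracking table and run ALTER XML SCHEMA COLLECTION ... ADD against each dependent collection):

-- Master collection holding every shared schema
CREATE XML SCHEMA COLLECTION dbo.ReferenceSchemas AS
N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:example:common">
    <xs:element name="Address" type="xs:string" />
  </xs:schema>';

-- Table relating each shared schema to the collections that duplicate it
CREATE TABLE dbo.SchemaUsage (
    SchemaNamespace      NVARCHAR(400) NOT NULL,  -- targetNamespace of the shared schema
    SchemaCollectionName SYSNAME       NOT NULL,  -- dependent collection that needs a copy
    SchemaXml            XML           NOT NULL   -- the schema document itself, per Michael's suggestion
);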

Thursday, March 8, 2012

Best Practice - Lookup or SQL from Variable?

Hi,

I am pulling data from FoxPro tables into SQL 2005, and want to only pull new or changed rows. Accordingly each table in Fox has a column LastChangedDateTime, indicating the last time the row was updated, and I have a table in SQL which has one row per Fox table, listing the table name and the most recent data pulled into SQL.

In 2000 DTS I would have pulled the SQL datetime value into a package variable, then used a parameterized SQL statement with ".. WHERE LastChangedDateTime > ? " to select the rows I require.

In SSIS this approach does not seem to be possible, and the options are that I either use a variable for the entire SQL statement or, as the first SSIS tutorial suggests, use a lookup against the SQL table.

Gut feel is that the lookup will perform slower than creating the variable SQL and executing that (given that the source table is 13 million rows and rising, and I only want the last 100,000 or so from today).

What is considered best practice under these circumstances?

Also is it possible to write SSIS scripts in C# rather than VB.NET, as the syntax differences are driving me mad? ;-)

Thanks in advance,

Richard R

I would go with the DTS-style method; it should work just fine. An Execute SQL Task can get the date value and store it in a variable. The variable can then be used in a parameterised query, in the same way as you did with DTS, but obviously using a Data Flow task and the correct source. That said, I have not tried it with FoxPro, but you will be using the same OLE DB driver, I assume, so it should work fine. Parameter support is available in the OLE DB Source, and the driver should support it if it did in DTS.

Using a lookup would not make sense as you will be doing far more work.
Using a variable for the command (with EvaluateAsExpression = True) is also perfectly valid, and sometimes the better choice, but for a simple query like this and since you have parameter support, I'd go with the former method, but there is nothing in it really.


The Script Task and Script Component both use Visual Studio for Applications (VSA), which means you get the power of .Net rather than a interpreted script language. Unfortunately VSA has only been implemented for VB.Net, there is no C# support. No idea if or when there will be either, but you are certainly not the first to raise the issue.

|||

Thanks Darren,

Sometimes it's good to check out a gut feeling - just in case the whole underlying system architecture has changed.

I'm not sure the FoxPro v9 OLE DB driver actually has parameter support; it didn't seem to work when I tried it, hence the original post. This is the first time I've had to interface with FoxPro, and there are definitely a few oddities about the process...

Regards,

Richard
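
For what it's worth, the SQL side of that DTS-style approach boils down to two statements; a rough sketch follows (ExtractControl and SourceTable are placeholder names - the thread describes the tables but not their actual names):

-- 1. Execute SQL Task: read the high-water mark into a package variable (e.g. User::LastPull)
SELECT LastChangedDateTime
FROM dbo.ExtractControl          -- assumed control table with one row per FoxPro table
WHERE TableName = 'SourceTable';

-- 2. Data Flow OLE DB source: parameterized query against the FoxPro table,
--    with User::LastPull mapped to the ? placeholder
SELECT *
FROM SourceTable
WHERE LastChangedDateTime > ?;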

Best practice

Hi
What is the preferred practice when I have 2 or more related
tables and I want to delete a row in the master table and have the
child tables automatically delete their related rows? Should I use
triggers in the database, enable cascade delete in the dataset, or
something else?
I use visual studio 2005 and sql server 2005.
Thanks
It depends. If you want the child records deleted, then a cascade delete is
appropriate. If you want the application to deal with the child records
first, then a foreign key constraint that will stop deletes will be valid.
Both are best practices, depending on your application requirements.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
<rolf-hje@.online.no> wrote in message
news:1139864939.431879.138190@.z14g2000cwz.googlegroups.com...
> Hi
> What is the prefered practice to use when I have 2 or more related
> tables and I want to delete a row in the master table and I want the
> child tables to automatically delete their related rows, Should I use
> triggers in the database, enable cascade delete in the dataset or
> somthing else ?
> I use visual studio 2005 and sql server 2005.
> Thanks
> Rolf
>|||Cascade with error handling could be one way, as you mention.
Another way, which I prefer, is to use a transaction when deleting data that
needs to delete related data.
I do recommend that you don't use triggers, since they can easily make
things go out of control.|||It depends. Cascading deletes are the simplest to implement, but can
complicate deadlock minimization because the order in which locks are
obtained across tables is not clearly defined. In addition, INSTEAD OF
triggers cannot exist on the referencing table of a cascading referential
action. Using FOR or AFTER triggers to cascade deletes is generally a bad
idea--not because of the performance impact, but because several triggers
can exist for an action on a table and the order in which they are executed
is not deterministic, IMO they should be avoided. I prefer to perform
deletes within a transaction in a stored procedure. It is then clear from
reading the text of the proc which tables will be affected, and it's easier
to control the order in which locks are obtained to minimize the likelihood
of a deadlock. The use of INSTEAD OF DELETE triggers instead of stored
procedures may warrant investigation because any locks applied depend on the
order in which statements appear within the trigger (none are applied as a
direct result of the trigger firing) in the same way as statements within a
stored procedure, and unlike the stored procedure method, they do not
require preventing direct access to the tables. (On the other hand, many
would say that you should always prevent direct access to the tables and
require all modifications to be performed using stored procedures.)
<rolf-hje@.online.no> wrote in message
news:1139864939.431879.138190@.z14g2000cwz.googlegroups.com...
> Hi
> What is the prefered practice to use when I have 2 or more related
> tables and I want to delete a row in the master table and I want the
> child tables to automatically delete their related rows, Should I use
> triggers in the database, enable cascade delete in the dataset or
> somthing else ?
> I use visual studio 2005 and sql server 2005.
> Thanks
> Rolf
>|||OK, thanks for all the replies.
I think I will avoid triggers and use cascading deletes. But which is
more efficient: cascading deletes in the database, or cascading deletes
in the dataset? Is there a performance difference between these two
options?
Thanks
Rolf
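
For completeness, a minimal sketch of declaring the cascade at the database level (table and column names are invented for the example):

CREATE TABLE dbo.MasterTable (
    MasterID INT NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.ChildTable (
    ChildID  INT NOT NULL PRIMARY KEY,
    MasterID INT NOT NULL,
    CONSTRAINT FK_Child_Master
        FOREIGN KEY (MasterID) REFERENCES dbo.MasterTable (MasterID)
        ON DELETE CASCADE    -- deleting a master row automatically removes its child rows
);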

Best practice

Hi,
I'm writing an application using a sql server database. In this database
there are roughly 3 kinds of tables:
1. tables containing data which is generated during the run of an
application
2. tables containing data entered by our clients
3. tables containing data entered by our compagny.
Twice a year our clients need to receive new data in table group 3. At that point
the data stored in table group 1 may be removed.
Only the data in group 2, entered by our clients should stay in the
database.
What's the best practice to achieve this goal?
I was thinking of splitting it up into 2 databases, 1 with group 1 tables, and
1 with group 2 and 3 tables. Twice a year we create a backup of this second
database and our clients restore this database. After this the clients need
to run a program to check the integrity.
Is this the way to go?
Thank's,
PerryOn Tue, 11 Apr 2006 15:36:42 +0200, Perry van Kuppeveld wrote:

>Hi,
>I'm writing an application using a sql server database. In this database
>there roughly 3 kinds of tables:
>1. tables containing data which is generated during the run of an
>application
>2. tables containing data entered by our clients
>3. tables containing data entered by our compagny.
>Twice a year our clients need to receive new data in table group 3. Thereby
>the data stored in table group 1 may be removed.
>Only the data in group 2, entered by our clients should stay in the
>database.
>What's the best practice to achieve this goal?
>I was thinking in splitting it up in 2 databases, 1 with group 1 tables, and
>1 with group 2 and 3 tables. Twice a year we create a backup of this second
>database en our clients restore this database. After this the clients need
>to run a program to check the integrity.
>Is this the way to go?
Hi Perry,
I'd prefer to keep all data in just one database. That makes it much
easier to maintain integrity (FOREIGN KEY constraints don't work
cross-database), plus it will probably yield better performance.
For your half-yearly update of the company-supplied tables, I'd
recommend that you distribute a script to your customers. The script
would either be a .SQL script with INSERT, UPDATE and DELETE statements,
or a .CMD file with a series of SQLCMD and BCP statements, plus the
files to be used in the bcp operations.
Hugo Kornelis, SQL Server MVP|||like Hugo said, one database for sure.
staging tables to import/export data. good names for all tables.
stored procedures to load/run the data in and out.
you need to make a backup before and after all the data movements.
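
As a hedged sketch of the half-yearly refresh Hugo describes, the script for one company-supplied (group 3) table might look roughly like this (CompanyLookup and its staging table are invented names; foreign keys from client tables would need to be considered before the delete):

-- Data arrives in a staging table (e.g. via bcp), then replaces the group 3 table in one transaction
BEGIN TRANSACTION;

    DELETE FROM dbo.CompanyLookup;

    INSERT INTO dbo.CompanyLookup (LookupID, LookupValue)
    SELECT LookupID, LookupValue
    FROM dbo.CompanyLookup_Staging;

COMMIT TRANSACTION;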

Wednesday, March 7, 2012

Best performance

What is the best thing to do to get the best performance?
I have a database that contains 600 tables. Those tables are reached by 10
different applications that we developed. I want to know if it's better for
performance to spread those 600 tables across 10 databases or leave those 600
tables in one database. Some tables are accessed by multiple applications but
most of the tables are only for one application.
This is a nearly impossible to answer correctly question. But...
1. Optimizer seems to be smartest when working within a single database.
Constraints are only within a database.
2. 600 tables is not that many. Some ERP systems have _well_ over 15,000
tables and run just fine.
3. If you have I/O problems, spreading the files (and each db has its own
files) across separate spindles can give you improvement. (But you have to
have a lot of activity before this really becomes worth doing.)
I would break up my tables or keep them together more by how I logically
view the systems and how I administer them, rather than use performance as
the first criterion.
Russell Fields
"Cris" <anonymous@.discussions.microsoft.com> wrote in message
news:3AB917FA-67DC-4C55-BD90-285A412C7F49@.microsoft.com...
> What is the best thing to do to get the best performance ?
> I have a database that contain 600 tables. Those table are reached by 10
differents applications that we developped. I want to know if it's better
for performance to spread those 600 tables in 10 databases or leave those
600 tables in one database. Some tables are accessed by multiples
applications but most of the tables are only for one application.
>|||Hi,
Incase if you have multiple disk controllers then you can create file
groups. SQL Server can improve the performance by controlling
the placement of data and indexes onto specific disk drives. This will
reduce the I/O and will give more performance.
One more option is Create 2 databases or go for 1 more server for reporting
(Incase you have quite a lot of report generation).
1. Make the reporting database sync using Transactional Replication or
Logshipping and keep read only.
This will reduce the query retrieval in the online datatabase and your OLTP
will be much faster.
Thanks
Hari
MCDBA
"Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
news:uxNEjYHCEHA.580@.TK2MSFTNGP11.phx.gbl...
> Cris,
> This is a nearly impossible to answer correctly question. But...
> 1. Optimizer seems to be smartest when working within a single database.
> Constraints are only within a database.
> 2. 600 tables is not that many. Some ERP systems have _well_ over 15,000
> tables and run just fine.
> 3. If you have I/O problems, spreading the files (and each db has its own
> files) across separate spindles can give you improvement. (But you have
to
> have a lot of activity before this really becomes worth doing.)
> I would break up my tables or keep them together more by how I logically
> view the systems and how I administer them, rather that use performance as
> the first criteria.
> Russell Fields
> "Cris" <anonymous@.discussions.microsoft.com> wrote in message
> news:3AB917FA-67DC-4C55-BD90-285A412C7F49@.microsoft.com...
> I have a database that contains 600 tables. Those tables are reached by 10
> different applications that we developed. I want to know if it's better
> for performance to spread those 600 tables in 10 databases or leave those
> 600 tables in one database. Some tables are accessed by multiples
> applications but most of the tables are only for one application.
>
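
To illustrate the filegroup suggestion above, a sketch of adding a filegroup on a separate drive and placing a table on it (database name, path, and sizes are examples only):

ALTER DATABASE MyAppDb ADD FILEGROUP FG_Data2;

ALTER DATABASE MyAppDb
ADD FILE (
    NAME = 'MyAppDb_Data2',
    FILENAME = 'E:\SQLData\MyAppDb_Data2.ndf',   -- file on a second spindle/controller
    SIZE = 500MB
) TO FILEGROUP FG_Data2;

-- New tables (or rebuilt indexes) can then be placed on the new filegroup
CREATE TABLE dbo.BigTable (
    ID INT NOT NULL PRIMARY KEY,
    Payload VARCHAR(100) NULL
) ON FG_Data2;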

Best Method to update table...

Hi everyone.
I've got a little question regarding the speed of an update query...

situation:
I've got different tables containing information which I want to add to one big table through a schedule (or as fast as possible).

Bigtable size:
est. 180000 records with 25 fields (most varchar).

Currently I've tried two different methods:
delete all rows in the big table and add the ones from the little tables again (through a UNION ALL query)
-> Speed ~ 15 seconds

refresh all changed rows (through a timestamp <> comparison) and add new titles (through a UNION ALL query)
-> Speed ~ 20 seconds

Does anybody know a faster solution? The union queries block the table for those 20 seconds...

Thanks for any reply!

RE: situation: I've got different tables containing information which I want to add to one big table through a schedule (or as fast as possible).
Bigtable size:
est. 180000 records with 25 fields (most varchar).

Currently I've tried two different methods:
delete all rows in the big table and add the ones from the little tables again. (trough union all query)
-> Speed ~ 15 Seconds

refresh all changed rows (trough timestamp <>) and add new titles (trough union all query)
-> Speed ~ 20 Seconds

Q1 Does anybody know a faster solution? The union queries block the table for those 20 Seconds... Thanks for any reply!

A1 Maybe.

As with many things, it depends on the requirements. For example, some possible considerations may include various permutations and combinations of any of the following: (not an exhaustive list)
a) using a lower isolation level for the union queries, and conditionally unioning only updated tables
b) implementing triggers to update the target as DML is committed at the source tables
c) a create, populate, and rename table scheme (dropping the old table) - see the sketch below
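
A rough sketch of option c), which keeps the big table readable while the replacement is built (table names are examples; in practice indexes and permissions would also have to be recreated on the new table):

-- Build the replacement table off to the side
SELECT *
INTO dbo.BigTable_New
FROM dbo.SmallTable1
UNION ALL
SELECT *
FROM dbo.SmallTable2;

-- Swap the tables inside a short transaction so readers are blocked only briefly
BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
    EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
COMMIT TRANSACTION;

DROP TABLE dbo.BigTable_Old;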