Showing posts with label practices.

Tuesday, March 27, 2012


Best techniques for database creation?

Dear Experts,
Does anyone know of any link covering best practices for schema and database creation in SQL Server 2000, in detail?
Faheem Latif
NETWORK SOLUTION

Faheem,
What specifically are you looking for? There are many things on the web if
you search Google. However, I recommend these links as a start:
SQL Server 2000 Operations Guide
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops0.mspx
Vyas's article on Best Practices
http://www.sql-server-performance.com/vk_sql_best_practices.asp
Microsoft SQL Server 2000 Best Practices Analyzer 1.0 Beta
http://www.microsoft.com/downloads/details.aspx?FamilyId=B352EB1F-D3CA-44EE-893E-9E07339C1F22&displaylang=en
Tinyurl: http://tinyurl.com/upzi
--
Mark Allison, SQL Server MVP
http://www.markallison.co.uk


Sunday, March 25, 2012

Best Practice-working with SQL Express Database

Hi,

I am new to working with databases using .NET C#. I would really appreciate it if someone could point me to a link where I can find best practices for working with a SQL Express database in my C# web service applications.

Preferably, I would like to have answers for:

- should I have one DB, or one for testing and one real DB?

- DB security

- whether to use Windows authentication or DB user authentication

etc.

Best practices for SqlExpress will depend somewhat on your environment/requirements. Since you are developing a web service, I suggest asking the question in one of the asp.net forums (forums.asp.net).

SqlExpress has its own forum here on the MSDN forums. I'll move the thread over there for more general info on Express best practices.
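
A general pointer on the authentication question in the meantime: Windows authentication is usually preferred where the environment allows it, since no password has to live in the connection string. As a rough sketch, the two options look like this in T-SQL on SQL Server 2005/Express (the account name and password are placeholders):

-- Windows authentication: grant access to the application's Windows account
CREATE LOGIN [MYDOMAIN\MyWebServiceAccount] FROM WINDOWS

-- SQL Server authentication: the password must then be stored in the app's connection string
CREATE LOGIN MyAppLogin WITH PASSWORD = 'Str0ng!Passw0rd'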

Best Practices?

Hi,

I am installing SQL 2005 in a cluster environment, and there is a section which asks you for the domain\group for each of the service accounts. I am going to have 3 virtual servers, so does that mean I should have a 3*3 structure?

Can anyone tell me what exactly the group does, and can there be 3 groups with 9 accounts?

What is the best practice?
Thanks

I would just create one Windows Group with three different user accounts, then use one account for all of the services on each instance.


Best Practices: Should other (unrelated) applications be installed on a SQL server?

I was taught long ago that database servers should be database servers
and nothing more. I have been asked by my current employer to install
a new application on our SQL server because "we aren't using all of
it." In other words, they feel the server is under-utilized and want
to run other applications on it. I told them this was a bad idea and
that we should identify another server for the installation. That was
rejected and I've been challenged to "prove" that best practices are
to avoid loading unrelated applications on a database server.
Can anyone point me to a statement from Microsoft or some DB authority
that says as much? I have located dozens of bloggers who agree with
me but I can't really cite "CyberDawg420" as a reference when making
my argument. Any help at all is greatly appreciated!
You seem to be trying to "prove" common sense.
How about this. Don't share unless there is absolutely no choice.
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP
|||On Nov 7, 10:38 am, "Geoff N. Hiten" <SQLCrafts...@.gmail.com> wrote:
> You seem to be trying to "prove" common sense.
> How about this. Don't share unless there is absolutely no choice.
You are spot on with the "common sense" comment. I'm frustrated
because it is obvious to me but they want "evidence." The only way to
PROVE I'm right is to do what they ask, slow down the SQL server, and
create numerous headaches. I would be willing to do this to make my
point but I'm the one who would have to clean up the aftermath.


Thursday, March 22, 2012


Best Practices: Recording Error Information (Beyond ErrorCode and ErrorColumn)

I'm sorry for asking this, as I'm sure that the answer is in one of the 208 matches I found searching on "ErrorCode". Unfortunately, this project is overdue and I need a solution fairly soon.

I should add that I've only been developing in SSIS since August 2007.

I have a complicated package, loading about 17 outputs of the XML Source into staging tables. I have been using the error outputs of any standard components I use, out of faith that, if Microsoft provided them, then they should be useful for something. I've been directing all of the error outputs for one of the 17 "tables" from the XML Source into a Union All, and then into an "error staging table", for each of the 17 outputs. This table includes all of the possible columns, but permits nulls for all of them. It also includes the ErrorCode and ErrorColumn.

Unfortunately, if these latter two are useful for anything, I haven't found it yet.

Right now, working with our first "mostly real" data, I'm getting 100% of my input rows written into error tables. Unfortunately, the information in the tables is of limited use in determining what went wrong. For instance, the ErrorColumn seems only to be populated if there was a specific error with a specific column. The Lookup component doesn't seem to populate ErrorColumn, even if only one column was used for the lookup! No information about the component producing the error output is supplied in the error output, either, which makes it difficult to determine which of the five or so possible error outputs is the one that produced the particular error row.

This proves that I'm missing something simple. How do people handle errors? In my custom components, I learned to use the Fire* methods to produce detailed messages before redirecting the row to the error output, but this sort of thing is not available through the standard components.

Do I really have to create a separate "add useful information to the error output" component, and use it on each error output?

Sorry for the attitude, but just when I think I'm winning, SSIS brings me back down to earth!

If you want to identify which component produced an error then simply hook a derived column component up to the error path that adds the name of the component producing the error to the data path. That's that one taken care of!

"Do I really have to create a separate "add useful information to the error output" component, and use it on each error output?"

What's the problem with doing that? Granted, you could argue that this information should be included anyway, but the rationale for it NOT being there is that it could potentially take up A LOT of memory - a rationale that I agree with. Hence you have the opportunity to do it yourself.

Note that if you're using a UNION ALL to collate all of your error rows into one place then you only need to do this once.

-Jamie

|||

Jamie Thomson wrote:

If you want to identify which component produced an error then simply hook a derived column component up to the error path that adds the name of the component producing the error to the data path. That's that one taken care of!

"Do I really have to create a separate "add useful information to the error output" component, and use it on each error output?"

What's the problem with doing that?

The main problem is that I don't know how!

The second problem is that there are about 50 or so places I'd have to add one of these components.

In particular, how can a downstream component learn about the component next-upstream? I also don't know how I'd "do it only once" downstream of the Union All.

|||

I've implemented a Derived Column transform for the first few of my error outputs, and it works well enough.

My next issue is that I have two custom components that are very heavily used. They produce good error messages through the Fire* methods. I now would like to take the messages they produce and place them into my new ErrorMessage column. I added a dummy Derived Column transform to the start of the chain of transforms in order to make the ErrorMessage column available with the correct data type and size throughout the transform chain.

The problem is that both of my components do nonstandard things with their collection of input columns, so I can't just add ErrorMessage as an input column. I know how to get the buffer index for an input column and how to use it to get values from the buffer for the column, and how to set them into the buffer.

My question is: can I get a buffer index for a column not in my input columns collection and use that index to set the value of that column? I see that there's a Virtual Input and Virtual Input Column Collection, but I haven't seen examples of how to use it.

This does sound like a hack, and I promise to fix it once I get the leisure to write a UI for these components so they don't have to do strange things with their input column collections.

Thanks.|||

John Saunders wrote:

Jamie Thomson wrote:

If you want to identify which component produced an error then simply hook a derived column component up to the error path that adds the name of the component producing the error to the data path. That's that one taken care of!

"Do I really have to create a separate "add useful information to the error output" component, and use it on each error output?"

What's the problem with doing that?

The main problem is that I don't know how!

Oh right. Sorry. Go here:

SSIS Nugget: Get error descriptions

(http://blogs.conchango.com/jamiethomson/archive/2005/08/08/SSIS-Nugget_3A00_-Get-error-descriptions.aspx)

John Saunders wrote:

The second problem is that there are about 50 or so places I'd have to add one of these components.

I'm not understanding why. If you push all the error outputs into a UNION ALL then you just need to add the code that I linked to above in a single component downstream of the UNION ALL. Unless I'm misunderstanding.

John Saunders wrote:

In particular, how can a downstream component learn about the component next-upstream?

It can't but I don't know why you'd want to.

-Jamie

|||

John Saunders wrote:

My question is: can I get a buffer index for a column not in my input columns collection and use that index to set the value of that column? I see that there's a Virtual Input and Virtual Input Column Collection, but I haven't seen examples of how to use it.

Spot on. Yes, you can do that. To be honest, the virtual input is what you see in the designer. The input itself contains the columns that are selected at design time; hence the columns in the input are a subset of the columns in the virtual input.

This code will return the virtual input and let you process the columns

Code Snippet

IDTSVirtualInput90 vInput = ComponentMetaData.InputCollection[0].GetVirtualInput();
foreach (IDTSVirtualInputColumn90 vColumn in vInput.VirtualInputColumnCollection)
{
    // inspect or select each virtual (design-time) column here
}

[that's done from memory so don't hang me if it doesn't work :-) ]

By the way, I will be publishing some code, via my blog and CodePlex, in the next couple of days that does exactly this, i.e. process data based on the virtual input. Keep a look out for that. Address is below.

-Jamie

Best practices: GROUP BY clause

I was wondering what the best way is to write a GROUP BY clause when there are many (and time-consuming) operations in the fields being grouped.

Fictitious example:

SELECT DeptNo, AVG(Salary) FROM Department GROUP BY DeptNo;

This will give me the average salary per department. Let's say, however, that
I had 10-15 fields being returned (along with the AVG(Salary)) and some fields even had operations being performed on them. Is it better to create a temporary table to calculate the sum per department (or a VIEW) and then
perform a JOIN with the rest of the data?

Fictitious example:

SELECT DATENAME(y, StartDate), DATENAME(m, StartDate), DATEPART(d, StartDate), SUBSTRING(DeptName, 1, 10), SomeFunction(SomeField), SomeFunction(SomeField), AVG(Salary)
FROM Department
GROUP BY DATENAME(y, StartDate), DATENAME(m, StartDate), DATEPART(d, StartDate), SUBSTRING(DeptName, 1, 10), SomeFunction(SomeField), SomeFunction(SomeField);

Am I better off writing my query this way or using a JOIN on some temporary table or view?

Thanks

Write your query this way. One way to maximize the efficiency of a process is to reduce the number of times the server has to scan through the data. By putting all your aggregate functions in a single statement, the server only needs to run through the dataset one time.

But are all those DATENAME and DATEPART functions necessary? That seems kind of wasteful. You could accomplish the same thing just by sorting by date.

|||Blindman... That was just an example I made up. I am not asking this for a particular case right now, but I have in the past had queries that had many fields, and many of those fields had math/string/etc. functions performed on them. Most of the time this was to provide formatting for a query that would be dumped into a report or output to the user. For example, I might format an ID by left-padding with zeroes:
RIGHT(REPLICATE(MyPadChar, MyFieldWidth) + CAST(MyID AS VARCHAR), MyFieldWidth) AS [MyFormattedID]

so all those fields would appear in my GROUP BY clause... I was wondering if this was a good practice.

Thanks

|||It's acceptable in my opinion.

Anybody else want to comment on this?

|||SELECT * FROM (
SELECT DATENAME(y, StartDate) AS Col1
, DATENAME(m, StartDate) AS Col2
, DATEPART(d, StartDate) AS Col3
, SUBSTRING(DeptName, 1, 10) AS Col4
, SomeFunction(SomeField) AS Col5
, SomeFunction(SomeField) AS Col6
, AVG(Salary) AS Col7
FROM myTable99) AS XXX
GROUP BY Col1, Col2, Col3, Col4, Col5, Col6

|||Yeah, I thought about suggesting that. I've used it for clarity of coding before, but can you think of any reason it might or might not be more efficient? I guess the question is, when you include a formula in the output and also specify it in the GROUP BY clause, does the server calculate the formula twice, or is it smart enough to just calculate it once?

|||Kaiser. Thanks for the hint... that'll clean things up a whole lot...
As for blindman, yeah, I'd like to know if evaluation takes place twice when the query is run.

|||I guess the question is, when you include a formula in the output and also specify it in the GROUP BY clause, does the server calculate the formula twice, or is it smart enough to just calculate it once?

It had better be once, since it's a derived table...I never checked, but a SHOWPLAN should tell you what's up.

But again, this is M$, so you never know...

But since I'm a betting kinda guy...

$US1000.00 on Once to Win....

|||Kaiser. Thanks for the hint....

You're welcome.... Kaiser? Try Brett, x002548, or Ski (get it...)

Where are you in the world?

|||Then would you agree that your subquery example would not be more efficient than coding formulas in the WHERE clause, though it scores points for clarity?

|||No... would you agree that using formulas would cause non-sargable predicates, thereby invalidating the use of any index?

|||Yes, but the use of indexes is lost anyway when you filter on the results of formulas in the subquery. I don't see how either of these methods would make efficient use of indexes.

|||In the subquery, Query Analyser still complains about the subquery you mentioned. If it sees an aggregate function with other fields, you have to GROUP BY the other fields.
Sorry, I read it too fast the first time. I thought I saw:

SELECT ClientID AS Col1, OrderID AS Col2, SUM(Price) AS Col3
GROUP BY Col1, Col2;

I did not see the subquery and thought that what you gave me was similar to
the ORDER BY 1, 2, 3... clause (instead of using actual column names)

|||scores points for clarity? are you talking about post #5?

yes, assuming all the errors were fixed up!

|||Yeah, I meant post #5. (the thread is getting a little long).

Um, the subquery won't run because of the aggregate missing a GROUP BY clause.

I also couldn't get the group clause to work on a column alias...

|||select col1, col2, col3, col4, col5, col6, col7
from (
select datename(y, startdate) as col1
, datename(m, startdate) as col2
, datepart(d, startdate) as col3
, substring(deptname, 1, 10) as col4
, somefunction(somefield) as col5
, somefunction(somefield) as col6
, avg(salary) as col7
from mytable99
group
by datename(y, startdate)
, datename(m, startdate)
, datepart(d, startdate)
, substring(deptname, 1, 10)
, somefunction(somefield)
, somefunction(somefield)
) as xxx
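
A footnote on the once-or-twice question above: SHOWPLAN settles it without running the query. A minimal sketch, reusing the fictitious mytable99 from the examples in this thread - count how many Compute Scalar steps show up in the plan text:

SET SHOWPLAN_TEXT ON
GO
SELECT DATENAME(m, StartDate) AS Col2, AVG(Salary) AS Col7
FROM mytable99
GROUP BY DATENAME(m, StartDate)
GO
SET SHOWPLAN_TEXT OFF
GO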

Best practices: changing values

What is the best way to design your tables in cases where field values change?

Example:
CREATE TABLE Product (ProductID INT, Description VARCHAR(32), Price SMALLMONEY...);

CREATE TABLE Purchase (PurchaseID INT, ProductID INT, Quantity INT);

Since prices obviously change over time, I was wondering what is the best table schema to use to reflect these changes, while still remembering previous price values (like for generating reports on previous sales...)

is it better to include a "Price SMALLMONEY" field in the purchases table (which kind of de-normalizes it) or is it better to have a separate ProductPrice table that keeps track of changing prices like so:

CREATE TABLE ProductPrice (ProductID INT, Price SMALLMONEY, CreationDate DATETIME...);

and have the Purchase table reference the ProductPrice table instead of the products table?

I have used both methods in the past, but I wanted to get other people's take on it.

Thanks

Because price can change for many reasons, I always keep it in the actual transaction row. For example, you might have different prices for a given product based on quantity purchased (for example, buying 100 units gets a price break). There might be reasons for different prices based on the customer (one price for wholesale, one for sub-contractors, another price for retail). These differences could be either discrete or cumulative. In short, the price in the inventory table might only be a starting point; the price in the transaction table is the authoritative price for a transaction.

-PatP

|||If you want to be able to track historical prices, such as how much a price has changed over time, then you need to add a time dimension to your price table.
But for a financial application such as this there is no substitute for storing the actual price paid in the transaction table.

|||(which kind of de-normalizes it)

No, it doesn't. :) It's an attribute of the purchase.

The purchase table should have the price paid at time of purchase.

There should be a ProductPrice table that holds the price history for each product. If you want to avoid duplicating data, you can put the ProductPriceID in the Purchase table so you have the exact price at the time the purchase was made.
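
Putting the two replies together, a minimal sketch of that layout using the fictitious names from the question; PricePaid and ProductPriceID are illustrative additions, not part of the original schema:

CREATE TABLE ProductPrice (
    ProductPriceID INT IDENTITY PRIMARY KEY,  -- one row per price version
    ProductID INT NOT NULL,
    Price SMALLMONEY NOT NULL,
    CreationDate DATETIME NOT NULL DEFAULT GETDATE()
)

CREATE TABLE Purchase (
    PurchaseID INT IDENTITY PRIMARY KEY,
    ProductID INT NOT NULL,
    ProductPriceID INT NULL,        -- optional link to the price version in effect
    PricePaid SMALLMONEY NOT NULL,  -- the authoritative price for this transaction
    Quantity INT NOT NULL
)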

Best Practices/Provider connecting to an Oracle Database?

Are there generalized best practices with regard to which method/provider to use when accessing an Oracle database? I have used both the "Native OLE DB\Microsoft OLE DB Provider for Oracle" and the "Native OLE DB\Oracle Provider for OLE DB" and both seem to have their own quirks (requirement to convert to Unicode, etc.), but I have also heard that I shouldn't be using an "OLE DB" source at all, but should set it up as an ADO.NET connection.

We are just beginning to implement SSIS, and are trying to establish Best Practices/Standards etc.

Are there any gotchas - performance and/or otherwise I should know about?

Thanks in advance!

I'm assuming you've looked at the SSIS Connectivity whitepaper at http://ssis.wik.is/File:Connectivity_White_Paper/Connectivity_and_SQL_Server_Integration_Services_forum_post.doc (Oracle connectivity section).

We have plans to benchmark connectors in the future, when we might be able to share best practices and performance stats.

Best Practices, Create a Test DB Subset

Hello, we have a production DB that is 125 GB. Lots of tables. Is there an
automated way to extract, say, 10% of all those records? Thanks.

Hi,
You have to write your own query to extract 10% of the data from each table.
Thanks
Hari
SQL Server MVP
"Pancho" <Pancho@.discussions.microsoft.com> wrote in message
news:7F1A5FC8-6431-49E0-812B-14ECEC8998CB@.microsoft.com...
> Hello, we have a production DB that is 125 GB. Lots of tables. Is there
> an
> automated way to extract say 10% of all those records? Thanks.|||That's going to be very time consuming as there are 100+ user tables. I was
hoping to perhaps create something more automated.
"Hari Pra" wrote:

> Hi,
> You have to write your own query to extract 10% data from each table.
> Thanks
> Hari
> SQL Server MVP
> "Pancho" <Pancho@.discussions.microsoft.com> wrote in message
> news:7F1A5FC8-6431-49E0-812B-14ECEC8998CB@.microsoft.com...
>
>|||Try something like this..
select name, id
into #TempTables
from sysobjects
where type = 'u'

declare @CurrentID int
declare @sqltext nvarchar(2000)

select @CurrentID = min(id) from #TempTables
while (@CurrentID is not null)
begin
    -- Currently just selecting, but you could dynamically add new tables into the DB from here.
    set @sqltext = N'select top 10 percent * from ' + (select name from #TempTables where id = @CurrentID)
    exec sp_executesql @sqltext
    select @CurrentID = min(id) from #TempTables where id > @CurrentID
end
"Pancho" wrote:

> Hello, we have a production DB that is 125 GB. Lots of tables. Is there
an
> automated way to extract say 10% of all those records? Thanks.|||Pancho
See if this helps
EXEC sp_msforeachtable 'select top 10 percent * from ?'
"Pancho" <Pancho@.discussions.microsoft.com> wrote in message
news:D790FDC0-4F2D-452F-8E47-B73FAC4A9D95@.microsoft.com...
> That's going to be very time consuming as there are 100+ user tables. I
> was
> hoping to perhaps create something more automated.
> "Hari Pra" wrote:
>sql
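
If the goal is to materialize the sample rather than just select it, a sketch along these lines could be bolted onto either approach. TestDB is an assumed, pre-created target database, and constraints, identity values, and referential integrity between the sampled tables are deliberately not handled here:

DECLARE @name sysname, @sql nvarchar(2000)
DECLARE tbls CURSOR FOR
    SELECT name FROM sysobjects WHERE type = 'U'
OPEN tbls
FETCH NEXT FROM tbls INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    -- SELECT INTO creates the target table and copies a 10 percent sample
    SET @sql = N'SELECT TOP 10 PERCENT * INTO TestDB.dbo.[' + @name + N'] FROM dbo.[' + @name + N']'
    EXEC sp_executesql @sql
    FETCH NEXT FROM tbls INTO @name
END
CLOSE tbls
DEALLOCATE tbls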

Best practices to transfer data/tables from TEST to LIVE

Hi all,
I'm definitely a newbie and have what is probably a simple question,
but I want to make sure I'm following good practices.
What is the best way to update a LIVE database with data being created
on a TEST/DEV database?
I want to send a few new records I created on a test environment onto
a live environment to a few different tables. It is only a couple of
records and I could probably do it manually in about 10 minutes. But I
assume eventually I'll have larger quantities of data to move and
tables to update, and I don't want to forget to include something.
What is the best way to handle this kind of stuff? Scripts? Any
other suggestions or different ideas?
Kind Regards,
Ray
You can achieve your goal by scripting or by using SSIS; scripting may seem the
simplest way for the job you described, but if you think that the same job
could be reused or extended later, you can build a package using
Business Intelligence Development Studio (BIDS). For more (really more!)
details, look at the Books Online "SQL Server Integration Services (SSIS)"
chapter.
Gilberto Zampatti
"bcap" wrote:

> HI all,
> I'm definately a newbie and have what is probably a simple question
> but I want to make sure I'm following good practices.
> What is the best way to update a LIVE database with data being created
> on a TEST/DEV database?
> I want to send a few new records I created on a test environment onto
> a live environment to a few different tables. It is only a couple
> records and could probably manually do it in like 10 minutes. But I
> assume eventually I'll have larger qualities of data to move and
> tables to update and I don't want to forget to include soemthing.
> What is the best way to handle this kind of stuff? Scripts? Any
> other suggestions or different ideas?
> Kind Regards,
> Ray
>
|||On Jun 9, 3:24 am, Gilberto Zampatti
<GilbertoZampa...@.discussions.microsoft.com> wrote:
> You can achieve your goal scripting or using SSIS; scripting could seem the
> simplest way for the job you described, but if you think that the same job
> could be reused or subsequently implemented, you can build a package using
> Business Intelligence Development Studio (BIDS). For more (really more!)
> details, look at the Books on Line "SQL Server Integration Services (SSIS)"
> chapter.
> Gilberto Zampatti
>
> "bcap" wrote:
>
>
>
> - Show quoted text -
Thank you very much!
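
For the small scripted case specifically, one common pattern is an INSERT...SELECT over a linked server, run from the TEST server. In this sketch LIVESERVER is an assumed linked server and the table and key names are only illustrative:

-- Push rows that exist in TEST but not yet in LIVE
INSERT INTO LIVESERVER.MyDb.dbo.Customer (CustomerID, CustomerName)
SELECT t.CustomerID, t.CustomerName
FROM dbo.Customer AS t
WHERE NOT EXISTS (SELECT *
                  FROM LIVESERVER.MyDb.dbo.Customer AS l
                  WHERE l.CustomerID = t.CustomerID)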



best practices to optimize a DB?

What are the best practices to optimize performance when accessing a SQL Server DB?
Commands, maintenance plans...
Thanks

Wow. That's a pretty wide open question. Here are some thoughts:

1a. Make sure you know what your backup and recovery plan is; performance isn't worth too much if a single point of failure or a user error causes an extended period of downtime.
1b. Make sure you TEST your recovery plan. If you don't, it's not a plan (it's something between a hope and a dream).
2. Make sure your performance and monitoring effort is in keeping with the priorities of the business; you can spend a LOT of time and money in this area, but if it isn't the business' priority, then you're wasting it.
3. Read, read, read.

MS SQL Server 2000 DBA Survival Guide (SAMS, 2000)
Microsoft SQL Server 2000 Operations Guide (Microsoft, 2001)
Inside SQL Server 2000 (Microsoft Press, 2000)
Books On Line

Regards,

hmscott
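
To make points 1a/1b and the maintenance side concrete, a few commands from the SQL Server 2000 toolbox worth scheduling; the database, table, and backup path are placeholders:

DBCC CHECKDB ('MyDatabase')                                 -- verify physical integrity
DBCC DBREINDEX ('dbo.MyLargeTable')                         -- rebuild indexes to remove fragmentation
UPDATE STATISTICS dbo.MyLargeTable                          -- refresh the optimizer's statistics
RESTORE VERIFYONLY FROM DISK = 'D:\Backups\MyDatabase.bak'  -- sanity-check a backup file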

Best practices to design datawarehouse for a Fraud Detection/Transaction Monitoring Application

Hi All,

This thread relates to the following thread from the Architecture forum:

http://forums.microsoft.com/msdn/showpost.aspx?postid=1908857&siteid=1&sb=0&d=1&at=7&ft=11&tf=0&pageid=2

After reaching the decision that the application would require data warehousing, the members suggested that it would be nice if it were posted on the data warehouse forum. I would appreciate it if you could go through the mentioned thread, which states the application's business requirements.

Now I am continuing from that thread, where we were discussing SCDs.

In our application a customer address is defined in 6 fields (Street, City, State, Country, Pincode, ContactNo), so if we are to maintain 3 types of addresses, like Office, Communication, and Residence, then we have to add 18 fields to our Customer Master dimension table in our data warehouse.

Suppose that after some time a customer's office address changes. What should be done in the Customer Master table at synchronization time: should the ETL process insert a new record into the dimension table with the updated address in the office address fields, or should it update the existing record's fields without adding a new one?

If we add a new record for changing dimensions, then we have to maintain a flag field holding the record's status, like 'current record' or 'old record'; this would help us to identify the main record. Every time a new customer record is added, the old record's status is changed to 'old record'.

Should we add a new record for changing fields like address, designation, employer, annual income, etc., or update the existing record only? In our analysis all these fields play a major role, because after each profile update a risk category is assigned to the customer. So we cannot afford to overwrite any field information, because at any time the old information can be asked for by management or the regulator.

If a new record is inserted for every field update, then the Customer Master warehouse table would grow to five times its size in a year in which each customer record is updated five times.

Please suggest an efficient approach to handle changing dimensions.

Thanks & Regards,

Sameer Gautam

Hi,

You don't have to create a record each time a change happens; it just depends on whether you want to capture the history for that field or not. For example, you may want to capture history for a change of address, so you would add a new record, but you may not want it for surname, so just update all records for that customer with the new surname.

SCDs are a pain, to say the least, when data warehousing; the principle is nice but coding it is sometimes complicated, especially if you try to use snowflakes. And space soon becomes an issue if your data changes regularly. And you are right that if you make a change to a customer 5 times in a year then you will get 5 new records, although you may code it so it only captures the end-of-year snapshot, rather than every change.

I would have a look through some of Kimball's articles:

http://kimballgroup.com/html/articlesArchitecture/articlesAdvancedDim.html

Good luck

Matt
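
A minimal Type 2 sketch of the flag-plus-dates pattern discussed above; every name here is illustrative rather than taken from the actual system:

CREATE TABLE DimCustomer (
    CustomerKey INT IDENTITY PRIMARY KEY,  -- surrogate key, one row per version
    CustomerID INT NOT NULL,               -- business key, stable across versions
    OfficeStreet VARCHAR(50),
    RiskCategory VARCHAR(20),
    EffectiveDate DATETIME NOT NULL,
    EndDate DATETIME NULL,                 -- NULL while the row is current
    IsCurrent BIT NOT NULL DEFAULT 1
)

-- When a tracked attribute changes: expire the current row, then insert the new version
DECLARE @CustomerID INT, @NewStreet VARCHAR(50), @NewRisk VARCHAR(20)
SELECT @CustomerID = 42, @NewStreet = 'New Street 1', @NewRisk = 'High'

UPDATE DimCustomer
SET EndDate = GETDATE(), IsCurrent = 0
WHERE CustomerID = @CustomerID AND IsCurrent = 1

INSERT INTO DimCustomer (CustomerID, OfficeStreet, RiskCategory, EffectiveDate, IsCurrent)
VALUES (@CustomerID, @NewStreet, @NewRisk, GETDATE(), 1)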


best practices stored procedures and triggers

Using SS2000 SP4. We have a .NET app that accesses the sql server database.
I've tried to keep all access via stored procedures and views and the user
only has execute permissions on the sp and select on the views.
The contractors have put a trigger on one of the tables.
CREATE TRIGGER [SetPrimaryContact_Company] ON [dbo].[tblCompanyContacts]
FOR INSERT, UPDATE
AS
IF (Select bitCompPrimaryContact From Inserted) = 1
Begin
Update tblCompanyContacts Set bitCompPrimaryContact = 0
Where numCompanyId = (Select numCompanyId From Inserted)
And numContactID <> (Select numContactId From Inserted)
End
So, the user needs select and update permissions on the table. Since I
didn't want to give insert, update or delete permissions on any tables to the
user, is there a way around it? What is a best practice approach? Do I just
not use update triggers?
Thanks,
--
Dan D.

As long as you are using stored procedures you do not need to use this
trigger. Tell the contractors to put this code within the stored procedure
that updates the table tblCompanyContacts.
Keith Kratochvil
"Dan D." <DanD@.discussions.microsoft.com> wrote in message
news:D9A15C06-49C3-4F1A-8EB1-884159C0184F@.microsoft.com...
> Using SS2000 SP4. We have a .NET app that accesses the sql server
> database.
> I've tried to keep all access via stored procedures and views and the user
> only has execute permissions on the sp and select on the views.
> The contractors have put a trigger on one of the tables.
> CREATE TRIGGER [SetPrimaryContact_Company] ON [dbo].[tblCompan
yContacts]
> FOR INSERT, UPDATE
> AS
> IF (Select bitCompPrimaryContact From Inserted) = 1
> Begin
> Update tblCompanyContacts Set bitCompPrimaryContact = 0
> Where numCompanyId = (Select numCompanyId From Inserted)
> And numContactID <> (Select numContactId From Inserted)
> End
> So, the user needs select and update permissions on the table. Since I
> didn't want to give insert, update or delete permissions on any tables to
> the
> user is there a way around it? What is a best practice approach. Do I just
> not use update triggers?
> Thanks,
> --
> Dan D.|||That sounds good. Thanks Keith.
--
Dan D.
"Keith Kratochvil" wrote:

> As long as you are using stored procedures you do not need to use this
> trigger. Tell the contractors to put this code within the stored procedur
e
> that updates the table tblCompanyContacts.
> --
> Keith Kratochvil
>
> "Dan D." <DanD@.discussions.microsoft.com> wrote in message
> news:D9A15C06-49C3-4F1A-8EB1-884159C0184F@.microsoft.com...
>
|||I don't believe you need to grant any additional permissions on
the base table to users to execute a trigger. The "grant execute" on
the update or insert stored procedures should be enough.
|||On Mon, 22 May 2006 06:52:02 -0700, Dan D. wrote:

Hi Dan,
First, fire the contractors. This trigger will fail on any multi-row
insert or update. No-one should write triggers like that and be paid for
it!

>So, the user needs select and update permissions on the table.
Says who? As long as the table is owned by the same user that owns the
trigger, the user doesn't need any permissions on the table - see the
repro script below.

> Since I
>didn't want to give insert, update or delete permissions on any tables to the
>user is there a way around it? What is a best practice approach. Do I just
>not use update triggers?
In most cases, I would recommend having all objects (tables, views,
stored procedures, triggers, functions, ...) owned by dbo. Then, you can
set permissions exactly as required. Due to all objects having the same
owner, all statements from within stored procedures, triggers, and
functions will work. (Unless you're using dynamic SQL).
Here's a repro to show that a user doesn't need permission on the table
if a trigger changes data in the table.
First, run the script below while connected with an account with
sufficient rights to add databases and logins:
USE master
go
DROP DATABASE test
go
sp_droplogin @loginame = 'test'
go
CREATE DATABASE test
go
sp_addlogin @loginame = 'test',
@passwd = 'test',
@defdb = 'test'
go
USE test
go
sp_grantdbaccess @loginame = 'test'
go
CREATE TABLE tab (a int NOT NULL,
b int DEFAULT NULL)
go
CREATE VIEW vw
AS
SELECT a, b
FROM tab
go
CREATE TRIGGER trig
ON tab
FOR INSERT, UPDATE
AS
IF @@ROWCOUNT > 0
BEGIN
UPDATE tab
SET b = a + 15
WHERE b IS NULL
AND EXISTS
(SELECT *
FROM inserted
WHERE inserted.a = tab.a)
END
go
CREATE PROC ins_tab (@a int)
AS
INSERT INTO tab (a)
VALUES (@a)
go
DENY ALL ON tab TO test
DENY UPDATE, INSERT, DELETE ON vw TO test
GRANT SELECT ON vw TO test
GRANT EXECUTE ON ins_tab TO test
go
Then, open a new connection, log in with userid 'test' (use SQL Server
authentication, not integrated authentication) and execute the code
below:
-- Login as user 'test'
-- Fails - no permissions on tab
SELECT *
FROM tab
-- Succeeds - SELECT permission on vw granted
SELECT *
FROM vw
-- Fails - no INSERT permission on vw
INSERT INTO vw (a)
VALUES (3)
-- Succeeds - EXECUTE permission on ins_tab granted
EXECUTE ins_tab @a = 3
-- Check that trigger has executed as well
SELECT *
FROM vw
go
Here's the output:
Msg 229, Level 14, State 5, Line 1
SELECT permission denied on object 'tab', database 'test', owner 'dbo'.
a b
-- --
Msg 229, Level 14, State 5, Line 1
INSERT permission denied on object 'vw', database 'test', owner 'dbo'.
a b
-- --
3 18
Hugo Kornelis, SQL Server MVP|||I see it now. Thanks for such a detailed explanation Hugo.
BTW, we did fire the contractors.
--
Dan D.
"Hugo Kornelis" wrote:

> On Mon, 22 May 2006 06:52:02 -0700, Dan D. wrote:
>
> Hi Dan,
> First, fire the contractors. This trigger will fail on any multi-row
> insert or update. No-one should write triggers like that and be paid for
> it!
>
> Says who? As long as the table is owned by the same user that owns the
> trigger, the user doesn't need any permissions on the table - see the
> repro script below.
>
> In most cases, I would recommend having all objects (tables, views,
> stored procedures, triggers, functions, ...) owned by dbo. Then, you can
> set permissions exactly as required. Due to all objects having the same
> owner, all statements from within stored procedures, triggers, and
> functions will work. (Unless you're using dynamic SQL).
> Here's a repro to show that a user doesn't need permission on the table
> if a trigger changes data in the table.
> First, run the script below while connected with an accoount with
> sufficient rights to add databases and logins:
> USE master
> go
> DROP DATABASE test
> go
> sp_droplogin @.loginame = 'test'
> go
> CREATE DATABASE test
> go
> sp_addlogin @.loginame = 'test',
> @.passwd = 'test',
> @.defdb = 'test'
> go
> USE test
> go
> sp_grantdbaccess @.loginame = 'test'
> go
> CREATE TABLE tab (a int NOT NULL,
> b int DEFAULT NULL)
> go
> CREATE VIEW vw
> AS
> SELECT a, b
> FROM tab
> go
> CREATE TRIGGER trig
> ON tab
> FOR INSERT, UPDATE
> AS
> IF @.@.ROWCOUNT > 0
> BEGIN
> UPDATE tab
> SET b = a + 15
> WHERE b IS NULL
> AND EXISTS
> (SELECT *
> FROM inserted
> WHERE inserted.a = tab.a)
> END
> go
> CREATE PROC ins_tab (@.a int)
> AS
> INSERT INTO tab (a)
> VALUES (@.a)
> go
> DENY ALL ON tab TO test
> DENY UPDATE, INSERT, DELETE ON vw TO test
> GRANT SELECT ON vw TO test
> GRANT EXECUTE ON ins_tab TO test
> go
> Then, open a new connection, log in with userid 'test' (use SQL Server
> authentication, not integrated authentication) and execute the code
> below:
> -- Login as user 'test'
> -- Fails - no permissions on tab
> SELECT *
> FROM tab
> -- Succeeds - SELECT permission on vw granted
> SELECT *
> FROM vw
> -- Fails - no INSERT permission on vw
> INSERT INTO vw (a)
> VALUES (3)
> -- Succees - EXECUTE permission on ins_tab granted
> EXECUTE ins_tab @.a = 3
> -- Check that trigger has executed as well
> SELECT *
> FROM vw
> go
> Here's the output:
> Msg 229, Level 14, State 5, Line 1
> SELECT permission denied on object 'tab', database 'test', owner 'dbo'.
> a b
> -- --
> Msg 229, Level 14, State 5, Line 1
> INSERT permission denied on object 'vw', database 'test', owner 'dbo'.
> a b
> -- --
> 3 18
>
> --
> Hugo Kornelis, SQL Server MVP
>|||The trigger they wrote also contained a rookie mistake. It was designed for
a single row at a time. When you are writing a trigger you always need to
plan on having multi row events occur at some point.
Snip--
Update tblCompanyContacts Set bitCompPrimaryContact = 0
Where numCompanyId = (Select numCompanyId From Inserted)
And numContactID <> (Select numContactId From Inserted)
End snip--
See how the subqueries assume that only one item will be returned. This code
will raise an error if there is more than one row in the Inserted table (in
other words, if a multi row update or insert occurs) and that would cause
the original transaction to be rolled back.
Far better would be the following:
---
Update tblCompanyContacts Set bitCompPrimaryContact = 0
FROM tblCompanyContacts C
JOIN Inserted I
ON I.numCompanyId = C.numCompanyId
And I.numContactID <> C.numContactId
---
This way no matter how many rows have been affected by the update or insert
statement the trigger will now handle them.
All of the advice you received on moving the logic to the stored procedure
has merit. You should understand the following. The trigger will fire for
any update or insert on your table. Some folks like to use triggers to
handle some of their data integrity enforcement and not worry about in their
procs. If you do put it in your procs, you will need to put that logic into
every proc that will update or insert into that table. If this is a large #
of procs then you are setting up a shotgun surgery approach to maintenance.
The other thing to be aware of is that your trigger fires as part of your
original transaction, so an error in it will cause your original action to
be rolled back, as well as the locks from your original action are still
maintained during the trigger. In this case that is not a bad thing, since
you want this to happen as part of the update, and if it fails, you want the
update to fail as well.
--
David Lundell
Principal Consultant and Trainer
www.MutuallyBeneficial.com
David@.MutuallyBeneficial.com

|||So, is it probably safer to stay away from subqueries in triggers?
One of the things we have to be careful about with triggers is that we do a
lot of data manipulation on the backend. So we have to be sure that if we
insert 5M rows on the backend and not through the app, a trigger doesn't fire
and do something it shouldn't.
Thanks,
--
Dan D.
"David Lundell" wrote:

> The trigger they wrote also contained a rookie mistake. It was designed fo
r
> a single row at a time. When you are writing a trigger you always need to
> plan on having multi row events occur at some point.
> Snip--
> Update tblCompanyContacts Set bitCompPrimaryContact = 0
> Where numCompanyId = (Select numCompanyId From Inserted)
> And numContactID <> (Select numContactId From Inserted)
> End snip--
> See how the subqueries assume that only one item will be returned. This co
de
> will raise an error if there is more than one row in the Inserted table (i
n
> other words, if a multi row update or insert occurs) and that would cause
> the original transaction to be rolled back.
> Far better would be the following:
> ---
> Update tblCompanyContacts Set bitCompPrimaryContact = 0
> FROM tblCompanyContacts C
> JOIN Inserted I
> ON I.numCompanyId = C.numCompanyId
> And I.numContactID <> C.numContactId
> ---
> This way no matter how many rows have been affected by the update or inser
t
> statement the trigger will now handle them.
> All of the advice you received on moving the logic to the stored procedure
> has merit. You should understand the following. The trigger will fire for
> any update or insert on your table. Some folks like to use triggers to
> handle some of their data integrity enforcement and not worry about in the
ir
> procs. If you do put it in your procs, you will need to put that logic int
o
> every proc that will update or insert into that table. If this is a large
#
> of procs then you are setting up a shotgun surgery approach to maintenance
.
> The other thing to be aware of is that your trigger fires as part of your
> original transaction, so an error in it will cause your original action to
> be rolled back, as well as the locks from your original action are still
> maintained during the trigger. In this case that is not a bad thing, since
> you want this to happen as part of the update, and if it fails, you want t
he
> update to fail as well.
> --
> David Lundell
> Principal Consultant and Trainer
> www.MutuallyBeneficial.com
> David@.MutuallyBeneficial.com
>
>|||Subqueries are not the issue. It is the way the subqueries were written
Update tblCompanyContacts Set bitCompPrimaryContact = 0
Where numCompanyId IN(Select numCompanyId From Inserted)
And numContactID NOT IN (Select numContactId From Inserted)
This would also work, although it might not be as efficient as my earlier
example.
As for backend loading. I usually disable my triggers for the load and
reenable them afterwards. But only if I make certain to perform what my
trigger would perform. In this case setting the bitCompPrimaryContact = 0.
ALTER TABLE dbo.Widget DISABLE TRIGGER ALL
Do the load
ALTER TABLE dbo.Widget ENABLE TRIGGER ALL
David Lundell
Principal Consultant and Trainer
www.MutuallyBeneficial.com
David@.MutuallyBeneficial.com
"Dan D." <DanD@.discussions.microsoft.com> wrote in message
news:21CA8851-BF8B-4DAE-92AB-BBC85A239E11@.microsoft.com...[vbcol=seagreen]
> So, is it probably safer to stay away from subqueries in triggers?
> One of the things we have to be careful about with triggers is that we do
> a
> lot of data manipulation on the backend. So we have to be sure that if we
> insert 5M rows on the backend and not through the app, a trigger doesn't
> fire
> and do something it shouldn't.
> Thanks,
> --
> Dan D.
>
> "David Lundell" wrote:
>|||We're still working on a solution. We have many clients that we do mailings
for and we have hundreds of databases. I'm not sure it's the best way to try
and keep track of every trigger on every table.
We're an example of a small company that grew very fast without any
standards or best practices. Now we're trying to get everything under control.
We recently had a case where a developer put a trigger on a table for an
application and didn't tell anyone about it. And to compound the problem, the
trigger didn't work like he thought it would. I guess he didn't test it
thoroughly. When someone loaded some new data on the backend, the trigger
fired and updated 150,000+ rows that it shouldn't have. We didn't discover
it until months later and we're trying to figure out how to roll it back.
Thanks for all of your help,
--
Dan D.
"David Lundell" wrote:

> Subqueries are not the issue. It is the way the subqueries were written
> Update tblCompanyContacts Set bitCompPrimaryContact = 0
> Where numCompanyId IN(Select numCompanyId From Inserted)
> And numContactID NOT IN (Select numContactId From Inserted)
> This would also work, although it might not be as efficient as my earlier
> example.
> As for backend loading. I usually disable my triggers for the load and
> reenable them afterwards. But only if I make certain to perform what my
> trigger would perform. In this case setting the bitCompPrimaryContact = 0.
> ALTER TABLE dbo.Widget DISABLE TRIGGER ALL
> Do the load
> ALTER TABLE dbo.Widget ENABLE TRIGGER ALL
>
> --
> David Lundell
> Principal Consultant and Trainer
> www.MutuallyBeneficial.com
> David@.MutuallyBeneficial.com
> "Dan D." <DanD@.discussions.microsoft.com> wrote in message
> news:21CA8851-BF8B-4DAE-92AB-BBC85A239E11@.microsoft.com...
>
>