Showing posts with label records. Show all posts

Tuesday, March 27, 2012

Best solution, iterate over millions of records and call extended sp

Hi,
I need to iterate over millions of rows in a table, calling an extended
stored procedure (written in C++ and not possible to write in TSQL) using
the columns of each row as parameters, and write the return values to a new
table. The current script opens a cursor.
What's the best way to implement it? BCP to a text file, have an external program
parse and rewrite the text file, and BCP back? xp_cmdshell an executable for
each row (so I needn't worry about C++ memory leak issues)? XML?...
Thanks,
"nick" <nick@.discussions.microsoft.com> wrote in message
news:75FAE6C1-EEC1-4D8B-A3A2-073F10CD859E@.microsoft.com...
> Hi,
> I need to iterate over millions rows in a table and I need call an
> extended
> stored procedure (written in C++ and not possible be written in TSQL)
> using
> the columns of each row as parameters and write the return values to an
> new
> table. The current script open a cursor.
> What's the best way to implement it? BCP to a text file and external
> program
> parse and write the text file and BCP back? sp_cmdshell an executible for
> each row? (so needn't worry about C++ memory leak issu). XML?....
> Thanks,
I'd say the "best" solution would be to rethink your architecture and
implement it differently if you plan to do this on a regular basis. This
doesn't sound like a very scalable or desirable way to use a client-server
database. Have you considered using SQL Server 2005, where you can implement
.NET code in the database? Or implementing your code in ADO rather than with
an XP?
If it's just a one-off requirement then you just need to test which approach
works best for you. It isn't really a SQL question since, for the purposes
of this exercise, you are just using SQL Server as a file dump rather than
what it was designed for. Why not just loop in TSQL and call the proc for
each row?
David Portas
SQL Server MVP
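For reference, a minimal hedged sketch of the plain T-SQL loop suggested above. xp_mycalc, dbo.SourceTable and dbo.ResultTable are hypothetical names, not from the thread; the extended proc is assumed to take two input parameters and return its result through an output parameter:
-- Hedged sketch only: all names are placeholders.
DECLARE @id int, @col1 int, @col2 varchar(50), @result varchar(100)
DECLARE src CURSOR FAST_FORWARD FOR
    SELECT id, col1, col2 FROM dbo.SourceTable
OPEN src
FETCH NEXT FROM src INTO @id, @col1, @col2
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Call the extended stored procedure once per row.
    EXEC master.dbo.xp_mycalc @col1, @col2, @result OUTPUT
    -- Write the return value to the new table.
    INSERT INTO dbo.ResultTable (id, result_value) VALUES (@id, @result)
    FETCH NEXT FROM src INTO @id, @col1, @col2
END
CLOSE src
DEALLOCATE src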
--|||If you have a situation that calls for looping through a cursor, then it's
better to implement the cursor on the client side than on the server. Open a
read-only, forward only ADO recordset and Command.Execute the stored
procedure for each row.
"nick" <nick@.discussions.microsoft.com> wrote in message
news:75FAE6C1-EEC1-4D8B-A3A2-073F10CD859E@.microsoft.com...
> Hi,
> I need to iterate over millions rows in a table and I need call an
> extended
> stored procedure (written in C++ and not possible be written in TSQL)
> using
> the columns of each row as parameters and write the return values to an
> new
> table. The current script open a cursor.
> What's the best way to implement it? BCP to a text file and external
> program
> parse and write the text file and BCP back? sp_cmdshell an executible for
> each row? (so needn't worry about C++ memory leak issu). XML?....
> Thanks,

Best solution, iterate over millions of records and call extended

In fact, the functionality needs to be available on the server. So I will
create a program, maybe a C# or C++ program, to do the looping and calculation,
and put the executable on the server so it can be launched via xp_cmdshell...
Should that be better than a big TSQL cursor?
"JT" wrote:

> If you have a situation that calls for looping through a cursor, then it's
> better to implement the cursor on the client side than on the server. Open a
> read-only, forward only ADO recordset and Command.Execute the stored
> procedure for each row.
> "nick" <nick@.discussions.microsoft.com> wrote in message
> news:75FAE6C1-EEC1-4D8B-A3A2-073F10CD859E@.microsoft.com...
>
>In fact, my question is:
Transact-SQL "declare cursor" (fast forward) script vs. client-side ADO code
with a fast-forward server cursor.
Which one is better for a very large number of rows?
"nick" wrote:
> In fact, the functionarity needs to be available on the server. So I will
> created a program, maybe C# or C++ program to do the looping and calculate
> and put the executible on the server so it can be launched via xp_cmdshell...
> It should be better than big TSQL cursor?
> "JT" wrote:
>

Thursday, March 22, 2012

Best Practices, Create a Test DB Subset

Hello, we have a production DB that is 125 GB. Lots of tables. Is there an
automated way to extract say 10% of all those records? Thanks.
Hi,
You have to write your own query to extract 10% data from each table.
Thanks
Hari
SQL Server MVP
"Pancho" <Pancho@.discussions.microsoft.com> wrote in message
news:7F1A5FC8-6431-49E0-812B-14ECEC8998CB@.microsoft.com...
> Hello, we have a production DB that is 125 GB. Lots of tables. Is there
> an
> automated way to extract say 10% of all those records? Thanks.|||That's going to be very time consuming as there are 100+ user tables. I was
hoping to perhaps create something more automated.
"Hari Pra" wrote:

> Hi,
> You have to write your own query to extract 10% data from each table.
> Thanks
> Hari
> SQL Server MVP
> "Pancho" <Pancho@.discussions.microsoft.com> wrote in message
> news:7F1A5FC8-6431-49E0-812B-14ECEC8998CB@.microsoft.com...
>
>|||Try something like this..
select
name, id
into
#TempTables
from
sysobjects
where
type = 'u'
declare @CurrentID int
select @CurrentID = min(id) from #TempTables
while (@CurrentID is not null)
begin
-- Currently just selecting, but you could dynamically add new tables
-- into the DB from here.
declare @sqltext nvarchar(2000)
set @sqltext = N'select top 10 percent * from ' +
    (select name from #TempTables where id = @CurrentID)
exec sp_executesql @sqltext
select @CurrentID = min(id) from #TempTables where id > @CurrentID
end
"Pancho" wrote:

> Hello, we have a production DB that is 125 GB. Lots of tables. Is there an
> automated way to extract say 10% of all those records? Thanks.|||Pancho
See if this helps
EXEC sp_msforeachtable 'select top 10 percent * from ?'
"Pancho" <Pancho@.discussions.microsoft.com> wrote in message
news:D790FDC0-4F2D-452F-8E47-B73FAC4A9D95@.microsoft.com...
> That's going to be very time consuming as there are 100+ user tables. I
> was
> hoping to perhaps create something more automated.
> "Hari Pra" wrote:
>
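For reference, a hedged sketch of turning that one-liner into an actual copy. TestDB is a hypothetical, pre-created empty database; SELECT ... INTO requires that the target tables do not already exist, and ? expands to [owner].[TableName], giving a valid three-part target name:
-- Hedged sketch: run from the production database.
EXEC sp_MSforeachtable 'SELECT TOP 10 PERCENT * INTO TestDB.? FROM ?'
Note that TOP 10 PERCENT without an ORDER BY takes an arbitrary 10% of each table, and the copied subset will not preserve constraints, indexes, or referential integrity between tables.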

Best Practices for Reducing Transaction Log Sizes?

Hi All,
I have a bunch of SQL servers that have a high volume of Bulk Inserts
happening every day (Millions of records per insert)
The most heavily used server has DBs in Simple Recovery Mode. However,
for both the database and log files, which are set to autogrow by 10%, I am
seeing the following characteristics:
database files have about 10-15% free space every night;
log files, however, are about 70-90% free for all DBs.
This is a problem since the log file size is 10 GB and only
2-3 GB of it is being used.
I tried using DBCC SHRINKFILE with TRUNCATEONLY; however, it not only freed
the space but also reduced the log file size to 0, and one of my
developers complained of a table getting truncated during a stored proc
execution at that time.
1. Is DBCC SHRINKFILE a good way to truncate free space from logs? How
could it have truncated the log to a size of 0?
2. What would be a good strategy to free up unused space from DB and
log files every day?
Thanks all!
Hi
Good stuff to start
http://www.karaszi.com/sqlserver/info_dont_shrink.asp
<theonlyrashid@.gmail.com> wrote in message
news:1128600308.204142.67240@.g44g2000cwa.googlegroups.com...
> Hi All,
> I have a bunch of SQL servers that have a high volume of Bulk Inserts
> happening every day (Millions of records per insert)
> The most heavily used server has DBs in Simple Recovery Mode. However
> both for the database and log files which are set to autogrow 10%, I am
> seeing the following characteristics:
> database files have about 10-15% free space every night.
> log files however are about 70-90% free for all DBs
> Now this is a problem since the log file size is 10gb and its using
> only 2-3gb of it.
> I tried using DBCC shrinkfile TRUNCATEONLY however it not only freed
> the space but also reduced the log filesize to 0 and also one of my
> developers complained of a table getting truncated during a stored proc
> execution at that time.
> 1. is DBCC shrinkfile a good way to truncate free space from logs? How
> could it have truncated the log to a size of 0.
> 2. What would be a good strategy to free up unused space from DB and
> Log files every day?
> Thanks all!
>
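If a shrink still looks justified after reading that article, a hedged sketch of checking log usage and shrinking to an explicit target size rather than with TRUNCATEONLY (the logical file name and the 4096 MB target are placeholders). Note also that DBCC SHRINKFILE operates on database files, not tables, so it cannot itself truncate a table:
-- Hedged sketch: see how full each log really is first.
DBCC SQLPERF(LOGSPACE)
-- Then shrink one database's log file down to a target size in MB.
USE MyDatabase
DBCC SHRINKFILE (MyDatabase_Log, 4096)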

Sunday, March 11, 2012

Best practice for Add, Edit records into database with lots of fields ?

What's the best practice for adding / editing a record into a database with lots of fields ?
I am not talking about the mechanics of it, as there are a lot of trivial examples using ADO.NET, stored procs, etc.

Deleting is easy, you just pass in (a few) primary key/keys to uniquely identify the record.

But in the real world, when you have, say, a table with 100 fields, do you code the INSERT sproc by hand, with 100 parameters... then call it with your ADO.NET code? Sounds like a lot of work to me...

What about updating? That's even worse: sometimes you may need to update only 3 or 4 fields, but using sprocs you would have to pass all 100 parameters in again and "update" the whole record (when in fact you are only changing 3 or 4 fields).

With the update i could write different sprocs targeting only the fields i wish to update, but that sounds like duplicating work, vs having one generic update proc.

Sometimes I just feel like bypassing sprocs and using inline SQL, as it would be less work... but I know it is untidy and has more potential to be buggy.

So come on guys (and gals)... let's hear your thoughts on how you would handle the insert / update scenarios when you have lots of fields ? Northwind examples are too trivial :-)

Hmm... more than one week and no comments, better file this one in the too hard basket... guess no one is willing to comment, or is the subject taboo? Where are all the certified professionals and / or other opinionated people... :-)

Or it could be there are no alternatives.
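One common middle ground for the partial-update concern is a single proc whose parameters default to NULL, with COALESCE keeping the existing value for anything the caller doesn't pass. A minimal hedged sketch with hypothetical table and column names:
-- Hedged sketch: dbo.Customer and its columns are illustrative only.
CREATE PROCEDURE dbo.Customer_Update
    @CustomerID int,
    @Name  varchar(100) = NULL,
    @Phone varchar(20)  = NULL,
    @City  varchar(50)  = NULL
AS
BEGIN
    UPDATE dbo.Customer
    SET Name  = COALESCE(@Name,  Name),
        Phone = COALESCE(@Phone, Phone),
        City  = COALESCE(@City,  City)
    WHERE CustomerID = @CustomerID
END
The trade-off is that such a proc cannot set a column back to NULL without an extra flag per column.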

Saturday, February 25, 2012

Best design method to allow for "dynamic" records

First, a quick overview of my project. I'm designing a vehicle
tracking system that takes data from multiple types of GPS devices,
stores the data in a common database, and allows the user to view
device locations in real time or create reports on previous activity.
Currently we're only using one type of device, but I'm trying to
futureproof the app so I don't have to redesign it down the road.
Since the devices have different capabilities, I'm trying to come up
with the best method to store their data in a common format. For
example, say I have two devices, DeviceA and DeviceB. Both can report
their latitude, longitude, speed, heading, and a timestamp. DeviceA
can also report an odometer value, whether or not it has a GPS fix,
and telematics data. DeviceB cannot report those values. Along with
this data, every record will be tagged with address information on the
server side. Down the road, we may have DeviceC, DeviceD, etc with
new capabilities.
Now, the question is, how can I store all of this information in a way
that is simple to search/order/etc that is also "dynamic"? I've
thought about using three tables...one would contain "basic" record
info...latitude, longitude, speed, heading, timestamp, and a reason
code for why the record was sent (ignition on/off, start/stop, etc).
A second table would contain address information (street number,
street name, city, state, ZIP) and would be linked back to the basic
table. The third table would contain a single field with an XML
fragment detailing the rest of the information for that record
(odometer value, GPS fix status, telematics, etc).
Problem is, I can't find any method to allow me to search through XML
contained in a column other than a full text search. I also have the
problem of creating a result set containing all of the dynamic columns
(or selected ones only) to be returned to my ASP.NET application for
reporting.
Other methods I have thought about...storing all "extra" info in a
huge table with columns for each value. This would result in a lot of
wasted space (NULL values everywhere for devices that don't support
those features) and new columns would have to be added each time a new
device is supported (if new features are provided). Yet another
method...store values as key/value pairs in a separate table. This
would be ridiculously slow though...I would have to generate some
heavy-duty dynamic SQL (crosstab query, basically) to pull all of the
values I need and dump them into a result set.
The data volume will be large (50K-60K records per day) and reporting
needs to be fairly responsive (web based reporting system...generate a
result set from SQL Server, typically using a date/time range, with
the required fields and pass back to data access layer for final
processing). So...out of the three methods I've thought about...any
comments or thoughts about which would be the best way to go? Are
there any other methods I should take into consideration? I know SQL
Server 2005 is supposed to have much improved XML support if I go that
route, but that's not an option at this point...I'm stuck with using
2000 for now.
Thanks for any help you can offer...it will be greatly appreciated!
Why don't you have a table like this for the extra info:
(Vehicle ID, Capability ID, Value)
Vehicle ID + Capability ID will be the primary key. You can avoid the NULLs this way, as you have rows for only those capabilities the vehicle has. If you think you don't add vehicles to the system often, then you can remove the VehicleID from the table and
create a table for every vehicle with (CapabilityID, Value) as columns.
Is this a possible option?
Chandra
"Jeff L." wrote:

> First, a quick overview of my project. I'm designing a vehicle
> tracking system that takes data from multiple types of GPS devices,
> stores the data in a common database, and allows the user to view
> device locations in real time or create reports on previous activity.
> Currently we're only using one type of device, but I'm trying to
> futureproof the app so I don't have to redesign it down the road.
> Since the devices have different capabilities, I'm trying to come up
> with the best method to store their data in a common format. For
> example, say I have two devices, DeviceA and DeviceB. Both can report
> their latitude, longitude, speed, heading, and a timestamp. DeviceA
> can also report an odometer value, whether or not it has a GPS fix,
> and telematics data. DeviceB cannot report those values. Along with
> this data, every record will be tagged with address information on the
> server side. Down the road, we may have DeviceC, DeviceD, etc with
> new capabilities.
> Now, the question is, how can I store all of this information in a way
> that is simple to search/order/etc that is also "dynamic"? I've
> thought about using three tables...one would contain "basic" record
> info...latitude, longitude, speed, heading, timestamp, and a reason
> code for why the record was sent (ignition on/off, start/stop, etc).
> A second table would contain address information (street number,
> street name, city, state, ZIP) and would be linked back to the basic
> table. The third table would contain a single field with an XML
> fragment detailing the rest of the information for that record
> (odometer value, GPS fix status, telematics, etc).
> Problem is, I can't find any method to allow me to search through XML
> contained in a column other than a full text search. I also have the
> problem of creating a result set containing all of the dynamic columns
> (or selected ones only) to be returned to my ASP.NET application for
> reporting.
> Other methods I have thought about...storing all "extra" info in a
> huge table with columns for each value. This would result in a lot of
> wasted space (NULL values everywhere for devices that don't support
> those features) and new columns would have to be added each time a new
> device is supported (if new features are provided). Yet another
> method...store values as key/value pairs in a separate table. This
> would be rediculously slow though...I would have to generate some
> heavy-duty dynamic SQL (crosstab query, basically) to pull all of the
> values I need and dump them into a result set.
> The data volume will be large (50K-60K records per day) and reporting
> needs to be fairly responsive (web based reporting system...generate a
> result set from SQL Server, typically using a date/time range, with
> the required fields and pass back to data access layer for final
> processing). So...out of the three methods I've thought about...any
> comments or thoughts about which would be the best way to go? Are
> there any other methods I should take into consideration? I know SQL
> Server 2005 is supposed to have much improved XML support if I go that
> route, but that's not an option at this point...I'm stuck with using
> 2000 for now.
> Thanks for any help you can offer...it will be greatly appreciated!
>
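A hedged sketch of the key/value layout described above, adapted to per-reading data rather than per-vehicle data (table names, column names, and capability IDs are illustrative only):
-- Hedged sketch: a core reading table plus a key/value table for
-- device-specific extras, so new capabilities need no schema change.
CREATE TABLE dbo.Reading (
    ReadingID   int IDENTITY(1,1) PRIMARY KEY,
    VehicleID   int          NOT NULL,
    Latitude    decimal(9,6) NOT NULL,
    Longitude   decimal(9,6) NOT NULL,
    Speed       smallint     NOT NULL,
    Heading     smallint     NOT NULL,
    ReadingTime datetime     NOT NULL
)
CREATE TABLE dbo.ReadingExtra (
    ReadingID    int NOT NULL REFERENCES dbo.Reading (ReadingID),
    CapabilityID int NOT NULL,   -- e.g. 1 = odometer, 2 = GPS fix
    Value        varchar(100) NOT NULL,
    PRIMARY KEY (ReadingID, CapabilityID)
)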
|||Hi Jeff,
SQL 2005 in effect adds the ability to XQuery the data on a column; there is
the "XML" column type that allows this. Also the performance is quite good
since (from what I understand) data is optimized and indexed based on the
XSD information given when defining this field.
Of course this approach is not an option since Yukon is still many months
away.
See below for the points I suggest you follow... and a mid-way solution
that can help!
Ciao,
Adriano

>...store values as key/value pairs in a separate table. This
> would be rediculously slow though...I would have to generate some
> heavy-duty dynamic SQL (crosstab query, basically) to pull all of the
> values I need and dump them into a result set.
Yes, this kind of normalization is very good because you don't rely on actual
fields to store information, thus reducing wasted space.
Performance-speaking: SQL Server 2000 has a great set of features you can
use to improve querying speed.
1) Indexed views: you can perform aggregations on this table using indexed
views in order to have a real-time view of your data in a "de-normalized" and
summarized way, where necessary.
2) Maintain only the last month's data here so your last-month queries stay quite
fast; move the older rows into a parallel "history" table. Report on this table
only when explicitly requested by the user.

> The data volume will be large (50K-60K records per day) and reporting
> needs to be fairly responsive (web based reporting system...generate a
> result set from SQL Server, typically using a date/time range, with
> the required fields and pass back to data access layer for final
> processing). So...out of the three methods I've thought about...any
> comments or thoughts about which would be the best way to go? Are
> there any other methods I should take into consideration? I know SQL
> Server 2005 is supposed to have much improved XML support if I go that
> route, but that's not an option at this point...I'm stuck with using
> 2000 for now.
Another solution?
Build many tables, one for each "device type"... where you can store "extra"
information without wasting space.

> Thanks for any help you can offer...it will be greatly appreciated!
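And the crosstab mentioned earlier doesn't have to be dynamic if the capability list for a given report is known up front. A hedged sketch against the hypothetical Reading/ReadingExtra layout sketched above, SQL Server 2000 style (MAX over CASE):
-- Hedged sketch: pivot selected key/value rows back into columns
-- for a date-range report; the date literals are placeholders.
SELECT  r.ReadingID,
        r.VehicleID,
        r.ReadingTime,
        MAX(CASE WHEN e.CapabilityID = 1 THEN e.Value END) AS Odometer,
        MAX(CASE WHEN e.CapabilityID = 2 THEN e.Value END) AS GpsFix
FROM dbo.Reading r
LEFT JOIN dbo.ReadingExtra e ON e.ReadingID = r.ReadingID
WHERE r.ReadingTime >= '20050201' AND r.ReadingTime < '20050301'
GROUP BY r.ReadingID, r.VehicleID, r.ReadingTime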

Best design for edit tracking?

Hey all,
This is a general question. Here's my scenario: We have a legacy
database. The core table within contains almost 4 million records in
SQL Server. Currently we have a web front end which allows uses to
search through the database for the information they need.
What the client wants is a web front end which allows some users to
edit the core table above. No problem. What the client also wants is
for the non-editable web front end (used for research) to display (via
colored text) which records have been edited. Example:
1.) Jimmy edits records x, y, and z in coretable1 using the editing
web app.
2.) Jane comes along and uses the research web app to hunt down some
records. She views records s through z. In her view she notices that
certain cells in rows x, y, and z are colored red. This tells her that
Jimmy has edited those specific fields in those specific rows.
My question is: what is the most efficient way to track these column
specific edits so that my web app can display them? This may seem like
a web dev question, but the reality is that my web apps have to
interact with SQL Server so if anyone has any input, I'd love to hear
it! Thanks.
OK, the quickest solution to this goes something like this...
Add two rows to your core table, the first is a DateTime with a Default
constraint of GetDate(). The second column is used to identify who made the
change.
Next, when you update one of the rows, you simply need to include the identifier
of the person who made the change.
Next when you read the table you need to read and compare two rows. The
most recent row, and the second most recent row. This provides the
information about what changed - as a freebie you'll gain access to the old
version of the row.
Another approach would be to have two tables. The first is the core table
as it is now. The second table contains a flag for each column in your core
table, the DateTime column (as above) and the user identifier column (as
above). This time, when you edit the row, you also place a new row into this
table, setting the flags for the columns which were altered. Don't forget
to include a foreign key back to your core table. When you select the row
from the core table you can also join on your change map, selecting the
max(datetime) and this will tell what columns were changed in the last edit.
But it won't tell you the values that they were before. There is a big
advantage in this method as your core table doesn't need to be altered.
Regards
Colin Dawson
www.cjdawson.com
"roy.@.nderson@.gm@.il.com" <roy.anderson@.gmail.com> wrote in message
news:1147534237.719954.218790@.j33g2000cwa.googlegroups.com...
> Hey all,
> This is a general question. Here's my scenario: We have a legacy
> database. The core table within contains almost 4 million records in
> SQL Server. Currently we have a web front end which allows uses to
> search through the database for the information they need.
> What the client wants is a web front end which allows some users to
> edit the core table above. No problem. What the client also wants is
> for the non-editable web front end (used for research) to display (via
> colored text) which records have been edited. Example:
> 1.) Jimmy edits records x, y, and z in coretable1 using the editing
> web app.
> 2.) Jane comes along and uses the research web app to hunt down some
> records. She views records s through z. In her view she notices that
> certain cells in rows x, y, and z are colored red. This tells her that
> Jimmy has edited those specific fields in those specific rows.
> My question is: what is the most efficient way to track these column
> specific edits so that my web app can display them? This may seem like
> a web dev question, but the reality is that my web apps have to
> interact with SQL Server so if anyone has any input, I'd love to hear
> it! Thanks.
>|||Colin
> Add two rows to your core table, the first is a DateTime with a Default
> constraint of GetDate(). The second column is used to identify who made
> the change.
I think you meant "add two columns", and it is worth mentioning that with
this solution you will have to write a trigger on that table in order to
track changes.
"Colin Dawson" <newsgroups@.cjdawson.com> wrote in message
news:yRn9g.68620$wl.14113@.text.news.blueyonder.co.uk...
> OK, the quickest solution to this goest something like this...
> Add two rows to your core table, the first is a DateTime with a Default
> constraint of GetDate(). The second column is used to identify who made
> the change.
> Next when update one of the rows, you need to simply include the
> identifier of the person who made the change.
> Next when you read the table you need to read and compare two rows. The
> most recent row, and the second most recent row. This provides the
> information about what changed - as a freebie you'll gain access to the
> old version of the row.
> Another approach would be to have two tables. The first is the core table
> as it is now. The second table contains a flag for each column in your
> core table, the DateTime column (as above) and the user identifier column
> (as above). This time when you edit the row, you also place a new row
> into this table settings the flags for the columns which were altered.
> Don't forget to include a foreign key back to your core table. When you
> select the row from the core table you can also join on your change map,
> selecting the max(datetime) and this will tell what columns were changed
> in the last edit. But it won't tell you the values that they were before.
> There is a big advantage in this method as your core table doesn't need to
> be altered.
> Regards
> Colin Dawson
> www.cjdawson.com
>
> "roy.@.nderson@.gm@.il.com" <roy.anderson@.gmail.com> wrote in message
> news:1147534237.719954.218790@.j33g2000cwa.googlegroups.com...
>|||oops, I did mean columns yes.
A trigger won't really be able to help, as you'll need to supply extra
information beyond what is stored in the original table. It would be better
to use a stored procedure and directly enter the data into the new columns.
Of course, the exception to this is that if the application connects to SQL
using separate usernames, it is possible to use @@User in a trigger to
accomplish the same result. With the applications that my company creates,
this is not possible as they always connect with the same user (connection
pooling).
Regards
Colin Dawson
www.cjdawson.com
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:uSloXSxdGHA.4108@.TK2MSFTNGP03.phx.gbl...
> Colin
> I think you menat "add to columns", and it is worth mentioning that with
> this solutuin you will have to write a trigget on that table in order to
> track changes
>
> "Colin Dawson" <newsgroups@.cjdawson.com> wrote in message
> news:yRn9g.68620$wl.14113@.text.news.blueyonder.co.uk...
>|||On 13 May 2006 08:30:37 -0700, roy.@.nderson@.gm@.il.com wrote:

>Hey all,
>This is a general question. Here's my scenario: We have a legacy
>database. The core table within contains almost 4 million records in
>SQL Server. Currently we have a web front end which allows uses to
>search through the database for the information they need.
>What the client wants is a web front end which allows some users to
>edit the core table above. No problem. What the client also wants is
>for the non-editable web front end (used for research) to display (via
>colored text) which records have been edited. Example:
>1.) Jimmy edits records x, y, and z in coretable1 using the editing
>web app.
>2.) Jane comes along and uses the research web app to hunt down some
>records. She views records s through z. In her view she notices that
>certain cells in rows x, y, and z are colored red. This tells her that
>Jimmy has edited those specific fields in those specific rows.
>My question is: what is the most efficient way to track these column
>specific edits so that my web app can display them? This may seem like
>a web dev question, but the reality is that my web apps have to
>interact with SQL Server so if anyone has any input, I'd love to hear
>it! Thanks.
Hi Roy,
This is impossible to answer, because the requirements are incomplete.
For instance, what happens if Jimmy edits rows x, y, and z (as in your
example), then Joan edits rows w and x, then Jimmy edits z another time
and then Jane views rows s through z. What columns in what rows have to
be marked as "changed"?
Another question - what if Jimmy edits rows x, y, and z, and then nothing
happens for a long time. After a year, Jane looks at rows s through z.
Should Jimmy's changes still be marked?
Hugo Kornelis, SQL Server MVP|||>
> My question is: what is the most efficient way to track these column
> specific edits so that my web app can display them? This may seem like
> a web dev question, but the reality is that my web apps have to
> interact with SQL Server so if anyone has any input, I'd love to hear
> it! Thanks.
There are several ways to accomplish that. Do you want your system to
be optimized for retrieval of the current version but support
occasional drill-down into editing history? Or do you want to optimize
retrieving history of edits and are ready to pay the price of slowing
down retrieval of current version?
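For what it's worth, a hedged sketch of the "change map" variant described above: one audit row per edit, with a bit flag per tracked column, written by the editing proc since the web app knows who the editor is. dbo.CoreTable and the column names are hypothetical:
-- Hedged sketch: one audit row per edit of the core table.
CREATE TABLE dbo.CoreTableEdit (
    EditID       int IDENTITY(1,1) PRIMARY KEY,
    CoreTableID  int         NOT NULL REFERENCES dbo.CoreTable (CoreTableID),
    EditedBy     varchar(50) NOT NULL,
    EditedAt     datetime    NOT NULL DEFAULT GETDATE(),
    NameChanged  bit NOT NULL DEFAULT 0,
    PhoneChanged bit NOT NULL DEFAULT 0
)
-- The research app colors cells from the latest audit row.
SELECT c.CoreTableID, c.Name, c.Phone,
       e.NameChanged, e.PhoneChanged, e.EditedBy, e.EditedAt
FROM dbo.CoreTable c
LEFT JOIN dbo.CoreTableEdit e
       ON e.CoreTableID = c.CoreTableID
      AND e.EditedAt = (SELECT MAX(e2.EditedAt)
                        FROM dbo.CoreTableEdit e2
                        WHERE e2.CoreTableID = c.CoreTableID)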

Friday, February 24, 2012

Best approach for pushing records to MS Access

All,

I am new to DTS/SSIS and have a couple of questions about using it to solve a problem. We have an application running on SQL Server 2005 where status records are written to a status table. I need to be able to send those records over to a status table in a legacy application running on Access.

Originally, I thought about writing a custom C# stored proc and accessing Access from it, and then someone pointed me to DTS/SSIS.

Is there a way to execute the package based on a trigger event when a row is inserted or updated? If not, and I take a scheduled approach (every 3 minutes, etc.), do I have to maintain a column for the records that get processed so they are not picked up again?

In general, is SSIS the approach to take? The overall business requirements are straightforward, but I am not sure if SSIS is overkill for this or not.

Thanks,

Steve
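On the scheduled option mentioned above, the usual pattern is indeed a processed flag (or a last-run watermark) so each run only picks up new rows. A minimal hedged sketch with hypothetical table and column names, using the SQL Server 2005 OUTPUT clause to mark and fetch in one statement:
-- Hedged sketch: dbo.StatusRecord and its Processed column are assumptions.
-- A job or SSIS package runs this every few minutes and pushes the
-- returned rows to the Access status table.
UPDATE dbo.StatusRecord
SET Processed = 1
OUTPUT inserted.StatusID, inserted.StatusCode, inserted.StatusTime
WHERE Processed = 0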

If I use an Execute SQL Task on the Control Flow, how do I use that resulting dataset as a Data Source on the Data Flow? I added a variable named 0 and type object, but I cannot figure out how to reference it on the Data Flow designer tab.

Sunday, February 19, 2012

Benchmark. Inserting records.

How many inserts can SQL execute in a second?
The table being inserted into has 3 fields (numeric, char(15) and
datetime)
and no other kinds of SQL statements are running against it.
Hardware: quad proc, 2GB RAM, RAID.
TIA,
Nic
this is one of those 'it depends' issues...
depending on the speed of your disks, number of indexes, other users on the
system, blocking, etc...
you could easily do hundreds or a few thousand per second on high-end
hardware.
On 2 procs... I'd be thinking more in the range of hundreds... but only
testing will tell for sure.
--
Brian Moran
Principal Mentor
Solid Quality Learning
SQL Server MVP
http://www.solidqualitylearning.com
"BN" <nc@.abc.com> wrote in message
news:e9hUkZcXDHA.652@.TK2MSFTNGP10.phx.gbl...
> How may inserts can SQL execute in a second?
> The table that's inserting into has 3 fields (numeric, char(15) and
> datetime)
> and no other kind of SQL statements are running against it.
> Hardware: quad proc, 2GB RAM, RAID.
> TIA,
> Nic
>|||First, parallelism is the key to optimizing data loading performance.
Create your insert routine so that it can easily be partitioned and
"parallelized".
Depending on your application, you might also consider using bulk insert,
bcp, or DTS - it should be possible to achieve 10's of 1000's of rows per
second with a table that narrow. I would estimate with a midrange server
you could easily go 50K/sec with bulk insert into an empty heap with only a
couple streams.
----
The views expressed here are my own
and not of my employer.
----
"Brian Moran" <brian@.solidqualitylearning.com> wrote in message
news:esoc4fcXDHA.3924@.tk2msftngp13.phx.gbl...
> this is one of those 'it depends' issues...
> depedning on the speed of your disks, number of indexes, other users on
the
> system, blocking, etc...
> you could easily do hundreds or a few thousands per second on high end
> hardware.
> On 2 procs... I'd be thinking more in the range of hundreds... but only
> testing will know for sure.
> --
> Brian Moran
> Principal Mentor
> Solid Quality Learning
> SQL Server MVP
> http://www.solidqualitylearning.com
>
> "BN" <nc@.abc.com> wrote in message
> news:e9hUkZcXDHA.652@.TK2MSFTNGP10.phx.gbl...
> > How may inserts can SQL execute in a second?
> >
> > The table that's inserting into has 3 fields (numeric, char(15) and
> > datetime)
> > and no other kind of SQL statements are running against it.
> >
> > Hardware: quad proc, 2GB RAM, RAID.
> >
> > TIA,
> >
> > Nic
> >
> >
>|||on a 2x2.4, using individual stored proc calls per single-row
insert, I can get >7K/sec using 10 separate threads.
By consolidating more than one single-row insert statement
into each stored procedure, >18K single-row inserts/sec is
possible,
and >30K rows/sec on multi-row inserts.
If you are doing more than one single-row insert in a
single stored proc, try using BEGIN/COMMIT TRAN even if
it is not required; this consolidates the transaction log
writes.
go to the next sql server magazine connections conference
for more info, brian is there as well
www.sqlconnections.com
>--Original Message--
>How may inserts can SQL execute in a second?
>The table that's inserting into has 3 fields (numeric, char(15) and
>datetime)
>and no other kind of SQL statements are running against it.
>Hardware: quad proc, 2GB RAM, RAID.
>TIA,
>Nic
>
>.
>
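A hedged sketch of the BEGIN/COMMIT TRAN consolidation mentioned above: batching many single-row inserts into one transaction so the log is flushed once per batch rather than once per row. dbo.Target and the 1000-row batch size are placeholders standing in for the 3-column table in the question:
-- Hedged sketch only.
DECLARE @i int
SET @i = 0
BEGIN TRAN
WHILE @i < 1000
BEGIN
    INSERT INTO dbo.Target (num_val, char_val, dt_val)
    VALUES (@i, 'row ' + CAST(@i AS varchar(10)), GETDATE())
    SET @i = @i + 1
END
COMMIT TRAN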

Thursday, February 16, 2012

Beginning and End of Month Dates

Basically what I am trying to do is as Crystal reports goes through records I want it to check a date field to see if they are the first day or last day of the month. If the first day of the month falls on a Saturday, (DayofWeek = 7), I want it to decrement the date by 1 and if the last day of month falls on a Sunday, (DayofWeek = 1), I want it to decrement the date by 2.

Any ideas?
The way of distinguishing between the two months can be done like this:

If Month({DateField}) <> Month({DateField}+1) //Obviously is -1 to check for the 1st day
if DayOfWeek({DateField})=6 then
Do This
else if DayOfWeek({DateField})=7 then
Do That

I can't recall if DayOfWeek was 1-based or 0-based, but the concept applies. :)|||I don't understand what you mean by:
"If Month({DateField}) <> Month({DateField}+1)"

What does this formula do?|||This formula only does the next set of steps IF the datefield is the last day of the month.

Ie, if you ran this today (31/5/05) :
"If Month({DateField}) <> Month({DateField}+1)"

If Month(31/5/05) <> Month(1/6/05)

...which is true. Today is the last day of the month.

If you want to know if the date is the first of the month, use -1.
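For anyone doing the same check on the SQL Server side instead of in the report, a hedged T-SQL sketch of the first/last-day-of-month adjustment from the original question. Note that DATEPART(weekday, ...) depends on SET DATEFIRST; 1 = Sunday and 7 = Saturday only with the US English default:
-- Hedged sketch: @d stands in for the report's date field.
DECLARE @d datetime
SET @d = '20050531'
-- Last day of month: adding one day changes the month.
IF MONTH(@d) <> MONTH(DATEADD(day, 1, @d)) AND DATEPART(weekday, @d) = 1
    SET @d = DATEADD(day, -2, @d)   -- last day fell on a Sunday
-- First day of month: subtracting one day changes the month.
IF MONTH(@d) <> MONTH(DATEADD(day, -1, @d)) AND DATEPART(weekday, @d) = 7
    SET @d = DATEADD(day, -1, @d)   -- first day fell on a Saturday
SELECT @d AS AdjustedDate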

Monday, February 13, 2012

Beginner question about inserting records

Hi,
I am new to SQL Server (version 8) and hope that someone can help me
solve a problem.
A colleague has set up a SQL Server database with several tables for
which he has constructed multiple applications. As a result, their
structure cannot be altered in any way.
I have received updates for the data -- but the updates are in dbase
files. I would like to insert these dbase files into the existing table
structures (a simple thing to do in Foxpro), but I can't seem to figure
out how to do it in SQL Server.
Can someone point me in the right direction? (I've searched the
archives and gotten a few hints about bulk insert but I'm missing some
critical piece of the puzzle.)
Many thanks,
Jo
<jacmcginty@.gmail.com> wrote in message
news:1116859509.543432.227390@.g14g2000cwa.googlegroups.com...
> Hi,
> I am new to SQL Server (version 8) and hope that someone can help me
> solve a problem.
> A colleague has set up a SQL Server database with several tables for
> which he has constructed multiple applications. As a result, their
> structure cannot be altered in any way.
> I have received updates for the data -- but the updates are in dbase
> files. I would like to insert these dbase files into the existing table
> structures (a simple thing to do in Foxpro), but I can't seem to figure
> out how to do it in SQL Server.
> Can someone point me in the right direction? (I've searched the
> archives and gotten a few hints about bulk insert but I'm missing some
> critical piece of the puzzle.)
> Many thanks,
> Jo
>
Jo, Take a look at BULK INSERT and bcp from the books online. If these
will not suffice and you must perform transformations (data validation,
re-sequencing of keys, normalizing values (yes = 1, no = 0, etc.)) on the data
as it comes in, then take a look at Data Transformation Services (DTS).
DTS has a wizard for doing imports like you are talking about.
HTH
Rick Sawtell
MCT, MCSD, MCDBA
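For the simple case, where each dBase table can be exported to a delimited text file first, a minimal hedged sketch of BULK INSERT; the file path, delimiters, and table name are placeholders:
-- Hedged sketch: loads a tab-delimited export into an existing table.
BULK INSERT dbo.ExistingTable
FROM 'C:\updates\existing_table.txt'
WITH (
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2   -- skip a header row, if the export has one
)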
|||BULK INSERT is a TSQL command, but it doesn't understand complex file formats. What you should be
looking at is "DTS", which is a COM object for import/export. There are three tools in SQL Server
that use this COM object:
DTS Wizard. This is the easiest one to use. Start here.
Package Designer. Use this after you've done a few wizards.
DTSRUN.EXE. Schedule a package created with either of the above two.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
<jacmcginty@.gmail.com> wrote in message
news:1116859509.543432.227390@.g14g2000cwa.googlegroups.com...
> Hi,
> I am new to SQL Server (version 8) and hope that someone can help me
> solve a problem.
> A colleague has set up a SQL Server database with several tables for
> which he has constructed multiple applications. As a result, their
> structure cannot be altered in any way.
> I have received updates for the data -- but the updates are in dbase
> files. I would like to insert these dbase files into the existing table
> structures (a simple thing to do in Foxpro), but I can't seem to figure
> out how to do it in SQL Server.
> Can someone point me in the right direction? (I've searched the
> archives and gotten a few hints about bulk insert but I'm missing some
> critical piece of the puzzle.)
> Many thanks,
> Jo
>
|||Hi, and thank you. The DTS wizard did the trick for this particular
problem (although I see that bulk insert may be helpful in the future).
May I ask a couple of follow-up questions regarding DTS?
1. Is it possible, using the DTS wizard, to change a field name on the
fly?
For example, some of the data tables that I am updating include
underscores in the field names. I need to remove the underscores to
make the updates compatible with previous issues of the data. (I could
do it manually after the import is completed, but since I expect to
repeat this exercise frequently, it would be helpful if I could make
this change during the import.)
2. Once a DTS package has been saved, can it be edited?
For example, I realized after I had saved a DTS and run it that one of
my field types was incorrect, but I couldn't see how to edit the DTS.
(I ended up recreating the whole thing to fix the error.)
Many thanks,
Jo

Sunday, February 12, 2012

Beginner needs help ... with trigger?

I have a table that records data and I want to be able to copy a few of the fields to another table in a second database whenever the data is inserted or updated. I assume the way to do that is through the use of a trigger, but have no clue how to begin.

For example:

db #1 Table User has fields;
id, name, address, zip, password, date

db #2 table n_user has fields;
id, name, password

Hope that's enough information to get some assistance...
use [Your active DB]
GO
create trigger [TR_User(,I,U,)]
on dbo.[user]
for insert,update
as begin
insert [Your history DB].dbo.[n_user] ([id], [name], [password])
select [id], [name], [password]
from inserted
end|||Thanks for your help.
I made the trigger but get this error on update of one field value.

Cannot insert explicit value for identity column in table 'FORUM_MEMBERS' when IDENTITY_INSERT is set to OFF.

Any clue??|||The difference between a history table and an active table is the difference between a picture and a movie.
You cannot use the same PK in both tables.

In [Your history DB].dbo.[n_user] table:
1,Add [oldid] column with datatype of [user].[id] column, PK will be on [n_user].[id] column.
2,Also consider using larger datatype on [n_user].[id] column, if [user] table is modified frequently.

3,Modify trigger:

use [Your active DB]
GO
alter trigger [TR_User(,I,U,)]
on dbo.[user]
for insert,update
as begin
insert [Your history DB].dbo.[n_user] ([oldid], [name], [password])
select [id], [name], [password]
from inserted
end
GO
delete [Your history DB].dbo.[n_user]
declare @x int
update dbo.[user] set @x=1

4...This will help sometimes
Adding FingerPrint timestamp column NOT NULL
Adding CreatedDate datetime column NOT NULL with default getdate().

5,Securing history table:

use [Your history DB]
GO
Deny insert,update,delete,references on dbo.[n_user] to public|||I see parts in the code for insert and delete... what about update?

Again... Thank you very much for your help.|||/*
You wrote:
"I have a table that records data and I want to be able to copy a few of the feilds to another table in a second database
WHENEVER THE DATA IS INSERTED OR UPDATED."
So you did not specify DELETED info.
*/

/* COMMENTED CODE */
--Switching to your active DB
use [Your active DB]
GO
--Creates trigger FOR INSERT,UPDATE on dbo.[user]
--This trigger uses INSERTED table of NEW VALUES for BOTH INSERT AND UPDATE
--( Look at "inserted tables" topic in BOL )
if object_id('TR_User(,I,U,)') is not null drop trigger [TR_User(,I,U,)]
GO
create trigger [TR_User(,I,U,)]
on dbo.[user]
for insert,update
as begin
insert [Your history DB].dbo.[n_user] ([oldid], [name], [password])
select [id], [name], [password]
from inserted
end
GO
--Deleting test rows in HISTORY
delete [Your history DB].dbo.[n_user]
--Filling history table with previosly inserted data (prehistoric)
declare @x int
update dbo.[user] set @x=1|||Ok... looks like the insert of a new member fires off the trigger just fine. Both tables are updated correctly. :)

However, when a user tries to update their profile... they receive a SQL error stating a primary key violation.

Here is the exact trigger that causes that error.

==============

CREATE trigger [TR_Players(,I,U,)]
on dbo.[players]
for insert,update
as begin
insert [Forum].dbo.[FORUM_MEMBERS] (M_name, M_username, M_password, M_email, M_quote)
select [PEmail], [PEmail], [Ppassword], [PEmail], [pcomments]
from inserted
end

=============

Now I went ahead and edited the trigger to be this,

=============

CREATE trigger [TR_Players(,I,)]
on dbo.[players]
for Insert
as begin
Insert [Forum].dbo.[FORUM_MEMBERS] (M_name, M_username, M_password, M_email, M_quote)
select [PEmail], [PEmail], [Ppassword], [PEmail], [pcomments]
from inserted
end

=============

As you might expect... this fires off correctly and both tables get their new information.

Now the question is how do I get the update trigger to update table 2 (FORUM_MEMBERS) without attempting to add another row with the same information, thus violating the pk constraint.

I tried to add a second trigger to the same table just for updates like such.

=============

CREATE trigger [TR_Players(,U,)]
on dbo.[players]
for Update
as begin
Insert [Forum].dbo.[FORUM_MEMBERS] (M_name, M_username, M_password, M_email, M_quote)
select [PEmail], [PEmail], [Ppassword], [PEmail], [pcomments]
from inserted
end

=============

Too bad that gave me the exact same error. Also, syntax-wise... I'm kind of confused why I had to use "inserted" and not "updated" to get successful syntax checking. Does the temp table "updated" not exist with triggers? Can I not run two separate triggers against the same table?

Oh, and as an FYI: I did not want delete syntax... I was just commenting that in your post I saw the keyword delete.

Like always... thanks very much for your time to deal with my issues..|||CREATE trigger [TR_Players(,U,)]
on dbo.[players]
for Update
as begin
update fm set
fm.M_name = i.[PEmail]
, fm.M_username = i.[PEmail]
, fm.M_password = i.[Ppassword]
, fm.M_email = i.[PEmail]
, fm.M_quote = i.[pcomments]
from [Forum].dbo.[FORUM_MEMBERS] fm
join inserted i
on fm.M_name=i.[PEmail] --PK join - verify
end

--for more See http://dbforums.com/showthread.php?threadid=640545|||You are like a God to me! Works great. Thank You Thank you! :)|||It seems that I need to make another trigger to pass changed passwords back to the original table. I attempted to modify the previous trigger to work in reverse. But no such luck. I could use a little assistance here.

I want the trigger to take the M_password field from the forum_members table (when it's updated) and update the other database
table players with the new password. Here is what I came up with...

---------------
CREATE trigger [TR_pw(,U,)]
on [Forum].dbo.[FORUM_MEMBERS]
for Update
as begin
update pl set
pl.Ppassword = i.[M_password]
from dbo.[players] pl
join inserted i
on pl.pemail=i.m_name --PK Join - verify
end
---------------

Thanks.|||Your triggers are probably chaining, learn more about nesting
http://dbforums.com/showthread.php?threadid=640545|||you know you're prolly correct... cuz the error message I'm receiving says something about exceeding the number of database connections.

I've read the post you referred me to... but am still not sure what it said. Sorry... I'm new to this. Can I assume that my trigger is formed correctly, but I need to somehow limit its ability to fire from an update by another trigger?|||Strike that...
I went to BOL and found how to turn off recursive triggers... I turned them off and it looks like it's working. Is there any effect on regular system operation due to this trigger setting?

Thanks for your help.|||You can put "if trigger_nestlevel(@@procid)>1 return" at the beginning of your data-modifying trigger. Checking-only triggers stay active in the trigger chain. Also, even with nested triggers switched off I cannot make Ver1 (table mirroring) work. Also, when you are using a recursive algorithm for a trigger, you must have recursive triggers on.
In most cases you do not need it.

--Ver1
create table A(X int)
create table B(X int)
GO
create trigger tiA on A for insert as
insert B select * from inserted
GO
create trigger tiB on B for insert as
insert A select * from inserted
GO
insert A values (1)
GO
drop table A
drop table B

--Ver2
create table A(X int)
create table B(X int)
GO
create trigger tiA on A for insert as
if trigger_nestlevel(@@procid)>1 return
insert B select * from inserted
GO
create trigger tiB on B for insert as
if trigger_nestlevel(@@procid)>1 return
insert A select * from inserted
GO
insert A values (1)
GO
drop table A
drop table B

Good luck !
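For reference, a hedged sketch of the two recursion-related settings mentioned above (SQL Server 2000 syntax; the database name is a placeholder). Direct recursion, a trigger firing itself, is a per-database option; nesting, trigger A firing trigger B, is a server-wide option:
-- Hedged sketch only.
EXEC sp_dboption 'MyDatabase', 'recursive triggers', 'false'
EXEC sp_configure 'nested triggers', 0
RECONFIGURE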