Showing posts with label practice.

Tuesday, March 27, 2012

best SCSI config

I am configuring a new db server and have a couple of best-practice questions.
I read somewhere that a server configured for SQL Server should have its OS
and transaction log on a RAID 1 array, and the data files on one or more RAID 5
arrays.
1.) Is having a RAID 1 array for the OS and transaction log much better than
having just one RAID 5 array that has the OS, transaction log, and data files?
2.) We are recoding our web application to store pictures and files in the
database rather than on the web server's file system. Our application does a
lot of loading and displaying of photos. How resource-intensive is saving a
photo and retrieving it from the database? Will it greatly slow down
non-photo transactions? Would I be wise to put all the photos and files (blob
data) on their own RAID 5 array?
"Dan" wrote in message
news:C73453B5-76E7-4D6C-8A58-9ED1AC11A87A@.microsoft.com...
:I am configuring a new db server and have a couple best practice questions.
:
: I read somewhere that a server confgured for SQL server should have its OS
: and transation log on a RAID 1 array, and the data file on 1 or more RAID
5
: arrays.
:
: 1.) Is having a RAID 1 array for the OS and transaction log much better
than
: having just one RAID 5 array that has the OS, transaction log, and data
files?
:
: 2.) We are recoding our web application to store pictures and files in the
: database rather than on the web servers file system. Our application does
a
: lot of loading and displaying of photos. How resource intensive is saving
a
: photo and retreiving it from the database? Will it greatly slow down
: non-photo transactions? Would I be wise to put all the photos and files
(blob
: data) on thier own RAID 5 array?
Here is the idea:
RAID 1 (mirroring, duplexing) writes fast, reads normal
RAID 5 (distributed data guarding with parity) writes slow, reads very fast
Log files are written to more often than read from. RAID 1 is a performance
increase over RAID 5.
Data files are read more often than written to. RAID 5 is a performance
increase over RAID 1.
Saving a blob in a database is a waste, IMHO. Save the file on the data
drive and store a link to it in the database.
If your goal is performance, blobs are counterproductive.
HTH...
Roland Hall
/* This information is distributed in the hope that it will be useful, but
without any warranty; without even the implied warranty of merchantability
or fitness for a particular purpose. */
Technet Script Center - http://www.microsoft.com/technet/scriptcenter/
WSH 5.6 Documentation - http://msdn.microsoft.com/downloads/list/webdev.asp
MSDN Library - http://msdn.microsoft.com/library/default.asp
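As a rough illustration of the layout described above, the database files can be placed on the different arrays at creation time. This is only a sketch - the drive letters, paths, and names are hypothetical:

CREATE DATABASE Shop
ON PRIMARY
    (NAME = Shop_data,  FILENAME = 'E:\Data\Shop_data.mdf'),   -- data files on a RAID 5 array
FILEGROUP Blobs
    (NAME = Shop_blobs, FILENAME = 'F:\Data\Shop_blobs.ndf')   -- BLOB-heavy tables on their own RAID 5 array
LOG ON
    (NAME = Shop_log,   FILENAME = 'D:\Logs\Shop_log.ldf')     -- transaction log on the RAID 1 mirror with the OS
GO

Tables holding the photo/blob data could then be created on the Blobs filegroup to keep that I/O off the main data array.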

Thursday, March 22, 2012

Best practices for reporting: Replicated servers + data warehouse server?

Good afternoon,
Lately there has been discussion of what the best practice would be for
enterprise reporting needs. Specifically, we currently have online OLTP
servers for our business apps, as well as a data warehouse server.
A member of our team has suggested that we create a new server farm (or
server) for real-time reporting, to take the real-time reporting
burden off the production servers.
I was looking for any advice or pointers to design a topology that can
support our current real time systems as well as data warehousing
needs, while still minimizing the burden of reporting against our live
production servers.
The mention of replication has come to mind, but I'm not completely
sold on attempting to replicate our production data for the "heck of
it"...
Any thoughts?
Best regards,
-Sean
Can you specify requirements for the reporting data? What would be the
maximum latency involved? I assume that reporting from the DW isn't good enough
because of the latency...
MC
"Sean Aitken" <sean.aitken@.gmail.com> wrote in message
news:1139858184.012942.18310@.g14g2000cwa.googlegro ups.com...
> Good afternoon,
> Lately there has been discussion of what the best practice would be for
> enterprise reporting needs. Specifically, we currently have online OLTP
> servers for our business apps, as well as a data warehouse server.
> A member of our team has suggested that we create a new server farm (or
> server) for real time reporting to alleviate the real time reporting
> burden off the production servers.
> I was looking for any advice or pointers to design a topology that can
> support our current real time systems as well as data warehousing
> needs, while still minimizing the burden of reporting against our live
> production servers.
> The mention of replication has come to mind, but I'm not completely
> sold on attempting to replicate our production data for the "heck of
> it"...
> Any thoughts?
> Best regards,
> -Sean
>
|||Good morning MC,
There aren't any real specific requirements at this time. But, we do
have the need to ensure that real time reporting doesn't impact
production systems. Apparently, some applications have caused performance
problems for real-time systems with some reports.
I'm more or less just looking for some existing topology patterns that
have been proven to satisfy the needs of the business. Our organization
has about 2000 employees all over the world, and we have many systems
in place, SQL Server as well as Oracle.
My personal feeling is that any real time reporting be designed with
performance in mind and anything else should pull from the data
warehouse, with the granularity as designed into the reporting and
model requirements.
I wish I had a good case example, but my main motivation for
approaching this issue is that another developer on our team made a
proposal to actually replicate the entire production environment for all real
time reporting. I'm having a hard time buying into that idea.
Thanks for any insight!
Cheers!
-Sean
Hi Sean,
Sure... there are some useful books to read on my beginners page.
http://www.peternolan.com/Beginners/...0/Default.aspx
The best book on architecture is the Corporate Information Factory book
by Bill Inmon et al.
http://www.amazon.com/exec/obidos/AS...700697-6194236
Though for some reason Amazon's links are not working at the moment.
CIF is a well-defined and well-articulated architecture for designers
to take into consideration when building an end-to-end information
infrastructure for sizable companies. Well worth reading...
Peter
Thank you very much Peter!
I'll be checking them out today!
Cheers!
-Sean

Best practices for remote users

Hi,
What is the best practice to allow remote users that are not part of our
domain to connect to a specific database on our server? Do we use Windows
authentication by adding them to the domain, or do we use SQL authentication?
These users are in a different company and they will be logged on to their
own Active Directory.
I would love to use Windows authentication by adding them to our domain,
but when setting up ODBC in the Control Panel there is no option to send the
user credentials. ODBC seems to want to use the credentials of the currently
logged-on user.
These users will be using Microsoft Access to run reports.
Thanks.
Your problem is the users are not logging on to your domain when they start
their machines. I doubt you can get Access to set up a connection using a
separate Windows Account - your best bet is to go with a SQL login - or you
could investigate using a VPN connection to see if that works...
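For reference, a minimal sketch of the SQL login route (SQL Server 2005 syntax; on SQL 2000 the equivalents are sp_addlogin and sp_grantdbaccess; all names here are placeholders):

CREATE LOGIN ReportUser WITH PASSWORD = 'Str0ng!Passw0rd'   -- SQL authentication, no domain account needed
GO
USE ReportsDb
GO
CREATE USER ReportUser FOR LOGIN ReportUser
EXEC sp_addrolemember 'db_datareader', 'ReportUser'         -- read-only access for the Access reports
GO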
"STech" <STech@.discussions.microsoft.com> wrote in message
news:57BAC952-CCBD-475D-AAD9-F81D8A696D2B@.microsoft.com...
> Hi,
> What is the best practice to allow remote users that are not part of our
> domain to connect to a specific database on our server? Do we use Windows
> Authentication by adding them to the domain or do we use sql
authentication.
> These users are in a different company and they will be logged on into
their
> active directory.
> I would love to use windows authentication by adding them into our domain
> but when setting up ODBC in the control panel, there is no option to send
the
> user credentials. ODBC seems to want to use the credentials of the
currently
> logged on user.
> These users will be using Microsoft Access to run reports.
> Thanks.
Hi STech,
I wanted to post a quick note to see if you would like additional
assistance or information regarding this particular issue. We appreciate
your patience and look forward to hearing from you!
Sincerely yours,
Michael Cheng
Online Partner Support Specialist
Partner Support Group
Microsoft Global Technical Support Center
Introduction to Yukon! - http://www.microsoft.com/sql/yukon
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!

Tuesday, March 20, 2012

Best Practices

In short, I am looking for a step by step best practice for making
database changes to a database that is using merge replication between
multiple locations.
Background information:
Using SQL 2000, I have a publisher and 3 subscribers of a very large
database. The database is in use 24/7/365. The entire database is
replicated.
In the next release of the application I need to update views, add
fields to certain tables, add entirely new tables, constraints, and
indexes.
Is there a way to implement these changes at the publisher and have it
update the subscribers, or at least update the publication information?
The new tables will need to be replicated as well, and I want to make
sure all associated rowguids, triggers, etc. are created internally for
replication.
Please have a look at sp_repladdcolumn and sp_repldropcolumn in BOL. Also there
is my article on making changes to an existing column:
http://www.replicationanswers.com/AddColumn.asp
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .
To replicate schema-only objects (like views, functions, stored procedures,
etc.) use snapshot replication.
If you use sp_addmergearticle to add new articles (tables) to your
publication, a snapshot of all of your tables will be generated. If you can,
use a separate publication for these new articles.
Otherwise other schema changes can be performed by using sp_repladdcolumn
and sp_repldropcolumn. These procs are limited in what they can do, so you
might find yourself having to recreate the publications in some cases.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
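As an illustration, adding and dropping a column on a published article might look like this (hypothetical table and publication names; see BOL for the full parameter lists and restrictions):

-- Add a nullable column to a published table and propagate it
EXEC sp_repladdcolumn
    @source_object = 'dbo.Orders',
    @column = 'StatusCode',
    @typetext = 'int NULL',
    @publication_to_add = 'all'
GO
-- Drop a column from a published table
EXEC sp_repldropcolumn
    @source_object = 'dbo.Orders',
    @column = 'StatusCode'
GO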
"Steve B" <SBaxter.RBS@.gmail.com> wrote in message
news:1162949137.394153.305500@.k70g2000cwa.googlegr oups.com...
> In short, I am looking for a step by step best practice for making
> database changes to a database that is using merge replication between
> multiple locations.
> Background information:
> Using SQL 2000, I have a publisher and 3 subscribers of a very large
> database. The database is in use 24/7/365. The entire database is
> replicated.
> In the next release of the application I need to update views, add
> fields to certain tables, add entirely new tables, constraints, and
> indexes.
> Is there a way to implement these changes at the publisher and have it
> update the subscribers or at least update the publication information
> as the new tables will need to be replicated as well and I want to make
> sure all associated rowguids, triggers, etc. are created internally for
> replication.
>

Best Practice? SQL 2000 and 2005 on same server

(I tried to search for answers before posting, but had difficulty)

I have read in Microsoft forums that you "can" install SQL 2005 as an instance on a SQL 2000 server (not clustered).

My decade+ of experience tells me it would be a bad idea; I'd expect the next service pack to fail or some other unexpected result. This is for a high-availability application where the vendor requires SQL 2000, and our custom coders want to use some SQL 2005 features.

Does anyone have experience with two instances of different versions in a high visibility production system?

Does anyone have some points I could use to argue against this, other than me sounding paranoid?

Thanks!

I just completed a seminar that was hosted by Michael Hotek, author and MVP. He stated that SQL Server 2000 and 2005 (non-beta) can reside on the same server with no problems for the 32-bit products. For 64-bit, there's a specific order to install both for both to function properly... but since you're probably talking about 32-bit, I won't get into that.

I've been running the two side-by-side for many months on dev and test machines, although not in production. I don't know of any problems with running them on the same machine (but I haven't tested every possible configuration either).

-PatP
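For what it's worth, a quick sanity check when two versions are installed side by side is to confirm which instance and build a connection has actually landed on:

-- Returns e.g. 8.00.xxxx for SQL Server 2000, 9.00.xxxx for SQL Server 2005
SELECT SERVERPROPERTY('ServerName')     AS ServerName,
       SERVERPROPERTY('InstanceName')   AS InstanceName,
       SERVERPROPERTY('ProductVersion') AS ProductVersion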

Best Practice: Procedures: (Insert And Update) OR JUST (Save)

I have a Product Table.

And now I have to create its Stored Procedures.

I am asking about the best practice regarding the Insert and Update methods.

There are two options.

1. Create separate 2 procedures like InsertProduct and UpdateProduct.

2. Create just 1 procedure like ModifyProduct, which programmatically checks whether the record is present or not. If present then update, and if not then insert. Just like Imar has done in his article: http://imar.spaanjaars.com/QuickDocId.aspx?quickdoc=419

Can anyone explain which is the better one?

Waiting for helpful replies.

There's no "best practice" for this one. Imar presumably likes his "Save" approach because whether you are adding a new record or amending an existing one, generally software applications ask you to click the Save button - so he likes to make his programming logic analogous.

Personally, I prefer the KISS principle, and create 2 separate procedures. It's clear from the interface which one to call as a result of user action. I also see the decision as to whether to Insert or Update as being a business logic decision, and I'm uncomfortable about putting business logic in a stored procedure. The reason for this is that the business logic may not be transferable to another database platform.
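To make the two options concrete, here is a minimal sketch of both styles against a hypothetical Product table (ProductId identity, Name):

-- Option 1: two separate procedures (the KISS approach)
CREATE PROCEDURE InsertProduct
    @Name varchar(50)
AS
    INSERT INTO Product (Name) VALUES (@Name)
GO

CREATE PROCEDURE UpdateProduct
    @ProductId int,
    @Name varchar(50)
AS
    UPDATE Product SET Name = @Name WHERE ProductId = @ProductId
GO

-- Option 2: one "save" procedure that decides internally
CREATE PROCEDURE ModifyProduct
    @ProductId int = NULL,   -- NULL means "new record"
    @Name varchar(50)
AS
    IF EXISTS (SELECT * FROM Product WHERE ProductId = @ProductId)
        UPDATE Product SET Name = @Name WHERE ProductId = @ProductId
    ELSE
        INSERT INTO Product (Name) VALUES (@Name)
GO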


I do not understand your last point regarding Business Logic.

I understand that, in your opinion, it is better to create 2 separate procedures.

But what about business logic methods?


Well, I suppose it depends on how you define "Business Logic". And this illustrates one of the problems with layering an application. The reason why there are so many books and theories on architecture is because there is no "right" way to do it, and definitions of what belongs in which layer are different. Some things so obviously belong in certain layers, but other things might or might not - depending on what you are used to, how you think, what you are told to do by your team leader etc. There is for example, a huge debate about whether stored procedures are a bad thing altogether, because they can be viewed as placing business logic in a database and not in the BLL.

It also depends on how atomic you want your application to be (how much you like to break functionality down into discrete parts - methods, classes, procedures, etc.). Imar would no doubt suggest that the action of the user defines that a Save() method be called, and that while the Save() method can include two alternative actions (Insert or Update), both lead to a row being saved to the database, so it's essentially the same action. The procedure decides whether an existing row is updated or a new one created. I see Insert and Update as being too different to be combined into one method. Consequently, I break the procedures apart into separate atomic constructs. I view the difference between the 2 as a business logic thing - because I can - and something in my gut tells me it is.

That's purely my view and is neither right nor wrong. Others may not agree, and they will no doubt have valid justification for their view. It's right for me but wrong for Imar. And that's why I said at the beginning that there is no Best Practice for Insert or Update v Save. It's purely down to your personal preference. Imar's solution has a certain appeal, in that it contains a certain "cleverness". Some people like that. Nothing wrong with that at all.

Quite often the difference between two alternatives is purely philosophical, and has nothing to do with performance, maintainability or re-useability, which are the three items that Best Practice should be concerned with.

[Edit]

Just re-read my first response and, having rambled on above, I see I may have missed your point. If you were asking about transferable business logic, it may be that you have to move the application to a different database system which doesn't support stored procedures, but may support basic INSERT, UPDATE, SELECT and DELETE saved queries. In this case, it wouldn't be too difficult to copy and paste the SQL from each part of the proc, but if you make procs do too much in terms of massaging data, or deciding on a course of action, you will create a load more work in your migration.

You are also perfectly free to ignore this on the basis that "it will never happen". Only you know best.

Best Practice: Primary key in joing table

hi there,

I have the following joining table (many-to-many relationship)...

CREATE TABLE [dbo].[products_to_products_swatch] (
[products_to_products_swatch_id] [int] IDENTITY (1, 1) NOT FOR REPLICATION NOT NULL ,
[product_id] [int] NOT NULL ,
[products_swatch_id] [int] NOT NULL
) ON [PRIMARY]
GO

question: do I need to include a primary key in this table, given that it is a joining table?

thanks
mike

If this is simply implementing a many-to-many join, then there is no need for a surrogate key. Just declare a composite primary key consisting of the foreign keys to both tables.
If you are storing additional information regarding the relationship (timestamp, notes, modifier, whatever) you may want to include a surrogate key for developmental consistency with your other tables, but it is not required.
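A composite-key version of the same table might look like this (the surrogate identity column is dropped here, which as noted above is optional):

CREATE TABLE [dbo].[products_to_products_swatch] (
    [product_id] [int] NOT NULL ,
    [products_swatch_id] [int] NOT NULL ,
    CONSTRAINT [PK_products_to_products_swatch]
        PRIMARY KEY ([product_id], [products_swatch_id])
) ON [PRIMARY]
GO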

Monday, March 19, 2012

Best Practice/recommendation dev data maint plans

We are working on converting to a SQL 2005 database. During the conversion we are having to rewrite a lot of code and are doing a lot of initial testing and development on development data. This is causing our transaction logs to get really big. I have created a maint plan that runs nightly and backs up the database and tran log, but throughout the day the tran logs are still getting really big and eating up a ton of disk space. Does anyone have suggestions on what sort of maint plan I can set up to run on my development data? At this point I am not concerned about being able to roll back the database, just keeping it as small as possible and "healthy".

All ideas are appreciated

Thanks

Chris

Hi,

If your database has its recovery model set to FULL, you can schedule a T-log backup half-hourly to keep it in shape.

BTW, are you doing re-indexing / bulk inserts and updates?

Hemantgiri S. Goswami
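For example, the scheduled job could run something like the following (hypothetical database name and path). Alternatively, if point-in-time recovery genuinely does not matter on a dev box, switching to the SIMPLE recovery model stops the log growth at the source:

-- Half-hourly log backup keeps the log trimmed under FULL recovery
BACKUP LOG DevDb TO DISK = 'E:\Backups\DevDb_log.trn'
GO
-- Or, for a dev database where roll-forward is not needed:
ALTER DATABASE DevDb SET RECOVERY SIMPLE
GO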


Would you recommend running the hourly t-log backups by creating a maint plan?

I have a maint plan running nightly that is performing reindexing, updating stats, and shrinking the db. We are also converting in data via text files and bulk inserting, but when we do this we are shutting off as much of the logging as possible.

Would you happen to know of any articles, white papers or anything like that I could read up on?

Thanks for your time.

Chris


Hi,

As you said, you have a maint plan running nightly which does re-indexing... Re-indexing actually keeps your T-log file growing; what plan do you have for shrinking? Refer to http://hemantgirisgoswami.blogspot.com/2006/03/cause-for-t-log-become-full-and-how-to.html

Regards

Hemantgiri S. Goswami

Thanks for the suggestions. The articles are very helpful.

Best practice...

Hi,
I was an Oracle programmer recently moved to MSSQL, so I'm trying to learn
the right way. Oracle was very cursor-oriented & MSSQL seems to seriously
discourage using them at all wherever possible.
In this situation here's roughly what I've written, with cursors - it'll give
you the direction I am going at least. I want to re-write it properly without
cursors.
Declare cur_DeleteChild Cursor Scroll For
    Select TableID
    from tbl_TableMaster
    Where ParentID in
        (Select TableID from tbl_TableMaster
         where supplierID = @v_FromSupplierID)

Open cur_DeleteChild
Fetch First FROM cur_DeleteChild into @ChildTableID
While (@@Fetch_Status <> -1)
Begin
    exec sp_DeleteChild @ChildTableID
    -- fetch inside the loop so it actually advances
    Fetch Next FROM cur_DeleteChild into @ChildTableID
End
Close cur_DeleteChild
Deallocate cur_DeleteChild
These are not large tables, nor will this sp be run often.
Rather than the subquery, I've read I should use a temp table if necessary,
but in most cases I can avoid using them at all.
(I know temp tables do not perform the same purpose as cursors)
Thanks for any suggestions.

I always use cursors and have not been sacked yet. It depends on what you want
to have happen in the cursor and the volumes of data; they can be just as
useful as in Oracle.
Having said that I also use temp tables too.
"Lesley" wrote:

> Hi,
> I was an oracle programmer recently moved to MSSql. So I'm trying to lear
n
> the right way. Oracle was very cursor oriented & MSSQl seems to seriously
> discourage using them at all wherever possible.
> In this situation here's what I roughly written, with cursors - it'll give
> you the direction I am going at least. I want to re-write properly witho
ut
> cursors.
> Declare cur_DeleteChild Cursor Scroll For
> Select TableID
> from tbl_TableMaster
> Where ParentID in
> (Select TableID from tbl_TableMaster where
> supplierID = @.v_FromSupplierID)
> Open cur_DeleteChild
> Fetch First FROM cur_DeleteChild into @.ChildTableID
> While (@.@.Fetch_Status <> -1)
> Begin
> exec sp_DeleteChild @.ChildTableID
> END
> Fetch Next FROM cur_DeleteChild into @.ChildTableID
> Close cur_DeleteChild
> Deallocate cur_DeleteChild
> These are not large tables, nor will this sp be run often.
> Rather than the subquery, I've read I should use temp table if necessary,
> but in most cases I can avoid using them at all.
> (I know temp tables do not perform the same purpose as cursors)
Looks like a cascaded DELETE to me. Have you considered using the ON DELETE
CASCADE option? Otherwise you can use a DELETE trigger.
If your table has a self-referencing foreign key and represents a hierarchy
of some unknown depth then you'll possibly want to use a Recursive trigger.
However, there are other data models for this kind of hierarchy that may
serve you better. Google for "Materialized Path" or "Nested Sets" if you
aren't familiar with them.
David Portas
SQL Server MVP
--
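For reference, a minimal sketch of the ON DELETE CASCADE option mentioned above (hypothetical tables; note SQL Server does not allow a cascading action on a self-referencing foreign key, which is why the recursive-trigger route exists):

CREATE TABLE Parent (ParentID int PRIMARY KEY)
CREATE TABLE Child (
    ChildID  int PRIMARY KEY,
    ParentID int NOT NULL
        REFERENCES Parent (ParentID) ON DELETE CASCADE  -- deleting a parent row deletes its children
)
GO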
"Lesley" wrote:

> Hi,
> I was an oracle programmer recently moved to MSSql. So I'm trying to lear
n
> the right way. Oracle was very cursor oriented & MSSQl seems to seriously
> discourage using them at all wherever possible.
> In this situation here's what I roughly written, with cursors - it'll give
> you the direction I am going at least. I want to re-write properly witho
ut
> cursors.
> Declare cur_DeleteChild Cursor Scroll For
> Select TableID
> from tbl_TableMaster
> Where ParentID in
> (Select TableID from tbl_TableMaster where
> supplierID = @.v_FromSupplierID)
> Open cur_DeleteChild
> Fetch First FROM cur_DeleteChild into @.ChildTableID
> While (@.@.Fetch_Status <> -1)
> Begin
> exec sp_DeleteChild @.ChildTableID
> END
> Fetch Next FROM cur_DeleteChild into @.ChildTableID
> Close cur_DeleteChild
> Deallocate cur_DeleteChild
> These are not large tables, nor will this sp be run often.
> Rather than the subquery, I've read I should use temp table if necessary,
> but in most cases I can avoid using them at all.
> (I know temp tables do not perform the same purpose as cursors)
> I always use cursors
Probably that explains why you can't cope with a 25 million row table.
Set-based code is smarter, more scalable and easier to maintain than cursors.
Cursors are useful - if you don't know SQL.
David Portas
SQL Server MVP
--
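If the work inside sp_DeleteChild can itself be expressed as DELETE statements, the whole loop collapses into a single set-based statement. A sketch under that assumption (tbl_ChildRows is a hypothetical stand-in for whatever table the procedure deletes from):

DELETE FROM tbl_ChildRows
WHERE ChildTableID IN
    (SELECT TableID FROM tbl_TableMaster
     WHERE ParentID IN (SELECT TableID FROM tbl_TableMaster
                        WHERE supplierID = @v_FromSupplierID))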
"marcmc" wrote:
> I always use cursors and have not been sacked yet
> depends on what you want to have happen in the cursor and the volumes of
> data, they can be just as useful as in oracle.
> Having said that I also use temp tables too.
> "Lesley" wrote:
Thanks David,
The wording used to create these tables has made it confusing.
Each row in the tbl_TableMaster contains the identity of a logical or
physical table - some self-referencing, yes. The logical tables are children
to the physical tables. The logical tables are the ones I'm deleting. The
rows that make up the logical child tables are held within a separate
physical table. That physical table will, in most situations, never be
deleted. I'm deleting the rows from the physical table that contains the
child rows & the row from tbl_TableMaster that identifies the logical child
table. That may have made it more confusing, but if you think there is
anything else that may apply to that situation please advise.
I will look into the recursive trigger; I have used a recursive call for
something similar and it worked well. I think that may serve my purpose.
Though I used a physical table to store my data - can I use a temp table when
making a call on itself? I will also look into "Materialized Path" & "Nested
Sets" as I'm not familiar with them & they may serve my purpose.
Thanks again.
"David Portas" wrote:
> Looks like a cascaded DELETE to me. Have you considered using the ON DELET
E
> CASCADE option? Otherwise you can use a DELETE trigger.
> If your table has a self-referencing foreign key and represents a hierarch
y
> of some unknown depth then you'll possibly want to use a Recursive trigger
.
> However, there are other data models for this kind of hierarchy that may
> serve you better. Google for "Materialized Path" or "Nested Sets" if you
> aren't familiar with them.
> --
> David Portas
> SQL Server MVP
> --
>
> "Lesley" wrote:
I'm not certain what you mean by "logical tables" in this context but it
sounds very much like a better design would solve your problems. You seem to
be confusing data with metadata.
David Portas
SQL Server MVP
--
"Lesley" <Lesley@.discussions.microsoft.com> wrote in message
news:F0ADAD7B-7D6D-44FA-B76F-665317A51BAB@.microsoft.com...
> Thanks David,
> The wording used to create these tables has made it confusing.
> Each row in the tbl_TableMaster contains the identity of a logical or
> physical table - some self referencing, yes. The logical tables are
> children
> to the physical tables. The logical tables are the one's I'm deleting.
> The
> rows that make up the logical child tables are held within a separate
> phsyical table. That physical table will, in most situations, never be
> deleted. I'm deleting the rows from the physical table that contains the
> child rows & the row from tbl_TableMaster that identifies the logical
> child
> table. That may have made it more confusing, but if you think there is
> anything else that may apply to that situation please advise.
> I will look into the recursive trigger, I have used a recursive call for
> something similar and it worked well. I think that may serve my purpose.
> Though I used a physical table to store my data - can I use a temp table
> when
> making a call on itself? I will also look into "Materialized Path" &
> "Nested
> Sets" as I'm not familiar with them & they may serve my purpose.
> Thanks again.
>
> "David Portas" wrote:
Thanks,
I can't change the design at this point. I have to work with what has
already been developed & in production for a while.
By logical tables - it's just a bunch of rows of data held in one table that
have something in common - indicated by one row in the master table. You're
right, logical table isn't the right term - that's what everyone calls them
here in that particular situation. Metadata would be a better way to
describe it I think.
I think I can work without the cursors from here on in.
Thanks for your help.
Lesley.
"David Portas" wrote:

> I'm not certain what you mean by "logical tables" in this context but it
> sounds very much like a better design would solve your problems. You seem
to
> be confusing data with metadata.
> --
> David Portas
> SQL Server MVP
> --
> "Lesley" <Lesley@.discussions.microsoft.com> wrote in message
> news:F0ADAD7B-7D6D-44FA-B76F-665317A51BAB@.microsoft.com...
>
Try something like this...
DECLARE @ChildTableID INT
-- 0x80000000 is the lowest int (-2147483648), so MIN() finds the first TableID
SELECT @ChildTableID = 0x80000000
WHILE (1=1)
BEGIN
    SELECT @ChildTableID = MIN(TableID)
    FROM tbl_TableMaster
    WHERE TableID > @ChildTableID
      AND ParentID IN (SELECT TableID FROM tbl_TableMaster
                       WHERE supplierID = @v_FromSupplierID)
    IF @ChildTableID IS NULL BREAK
    EXEC sp_DeleteChild @ChildTableID
END
It should run faster than the cursor and use less resources.
It also assumes that you never use "-2147483648" as a TableID.
"Lesley" <Lesley@.discussions.microsoft.com> wrote in message
news:B8354BE6-3FDD-4917-BC11-A54302FBCDEB@.microsoft.com...
> Hi,
> I was an oracle programmer recently moved to MSSql. So I'm trying to
learn
> the right way. Oracle was very cursor oriented & MSSQl seems to seriously
> discourage using them at all wherever possible.
> In this situation here's what I roughly written, with cursors - it'll give
> you the direction I am going at least. I want to re-write properly
without
> cursors.
> Declare cur_DeleteChild Cursor Scroll For
> Select TableID
> from tbl_TableMaster
> Where ParentID in
> (Select TableID from tbl_TableMaster where
> supplierID = @.v_FromSupplierID)
> Open cur_DeleteChild
> Fetch First FROM cur_DeleteChild into @.ChildTableID
> While (@.@.Fetch_Status <> -1)
> Begin
> exec sp_DeleteChild @.ChildTableID
> END
> Fetch Next FROM cur_DeleteChild into @.ChildTableID
> Close cur_DeleteChild
> Deallocate cur_DeleteChild
> These are not large tables, nor will this sp be run often.
> Rather than the subquery, I've read I should use temp table if necessary,
> but in most cases I can avoid using them at all.
> (I know temp tables do not perform the same purpose as cursors)
Thank you very much. I've just returned to this project after a few weeks &
this code will do the trick.
Thanks again.
"Rebecca York" wrote:

> Try something like this...
> DECLARE @.ChildTableID INT
> SELECT @.ChildTableID = 0x80000000
> WHILE (1=1)
> BEGIN
> SELECT @.ChildTableID = MIN( TableID ) FROM tbl_TableMaster WHERE TableID
>
> @.ChildTableID
> AND ParentID IN( SELECT TableID FROM tbl_TableMaster WHERE supplierID =
> @.v_FromSupplierID )
> IF @.ChildTableID IS NULL BREAK
> EXEC sp_DeleteChild @.ChildTableID
> END
> It should run faster than the cursor and use less resources.
> It also assumes that you never use "-2147483648" as a TableID.
>
> "Lesley" <Lesley@.discussions.microsoft.com> wrote in message
> news:B8354BE6-3FDD-4917-BC11-A54302FBCDEB@.microsoft.com...
> learn
> without
>
>

Best Practice, Stored Procedures & Datasets

Hi,
Would be interested to hear your thoughts on whether it is best to minimise or
maximise the use of stored procedures in SQL Reporting. Is it best to have
as much as possible coming from stored procedures, or is it better to have as
little as possible? At this stage I am more concerned about management
rather than performance. However, I'd love to hear arguments from all sides!
Cheers,
Jay

I prefer to use stored procedures as much as possible. There are two reasons
for this: I think it's more efficient to let SQL Server handle the
processing needed to return the correct dataset and just use RS to do the
formatting, and it's usually easier to change the stored procedures later
than your report. But if you know you won't be able to access the SQL server
later, you should put the queries and logic in your RS report.
Kaisa M. Lindahl
"Jay Sanderson" <jay@.REMOVEMEacttiv.com> wrote in message
news:evEMmciJGHA.604@.TK2MSFTNGP14.phx.gbl...
> Hi,
> Would be interested to hear your thoughts on whether is best to minimise
> or maximise the use of stored procedures in SQL Reporting. Is it best to
> have as much as possible coming from Stored Procedures or is it better to
> have as little as possible. At this stage I am more concerned about
> management rather than performance. However I'd love to hear arguments
> from all sides !
> Cheers,
> Jay
Yes, I would recommend using stored procedures as much as possible, as you can
usually reuse them in other reports. I find it particularly useful to use
SPs for returning default parameters, as they are usually the same across most
reports I build, and if you need to change any logic you just do it once and
you don't even need to republish the report or anything.
If you store the SQL query in the report itself, this will become
unmanageable as the number of reports increases. Any small business logic
change will mean you have to trawl through all your reports to edit your
queries and then republish the reports. SPs will save you all this hassle.
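As an illustration of the reusable default-parameter idea, a proc along these lines (hypothetical name, assuming a trailing one-month window) can feed the defaults of date parameters in many reports at once:

CREATE PROCEDURE rpt_DefaultDateRange
AS
    SELECT DATEADD(month, -1, GETDATE()) AS StartDate,  -- default report window: the last month
           GETDATE()                     AS EndDate
GO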
"Kaisa M. Lindahl" wrote:
> I prefer to use stored procedures as much as possible. There are two causes
> for this: I think it's more efficient to let SQL Server handle the
> processing needed to return the correct dataset and just use RS to do the
> formatting, and it's usually easier to change the stored procedures later
> than your report. But if you know you won't be able to access the SQL server
> later, you should put the queries and logic in your RS report.
> Kaisa M. Lindahl
> "Jay Sanderson" <jay@.REMOVEMEacttiv.com> wrote in message
> news:evEMmciJGHA.604@.TK2MSFTNGP14.phx.gbl...
> > Hi,
> >
> > Would be interested to hear your thoughts on whether is best to minimise
> > or maximise the use of stored procedures in SQL Reporting. Is it best to
> > have as much as possible coming from Stored Procedures or is it better to
> > have as little as possible. At this stage I am more concerned about
> > management rather than performance. However I'd love to hear arguments
> > from all sides !
> >
> > Cheers,
> >
> > Jay
> >
>
>

Best Practice when copy table from srv to srv

Hi!

My first post in this great forum. :)
Here goes:

I need some feedback on best practice (or just possible practice!) for creating a copy of a table from one SQL Server to another SQL Server.

I have a stored proc that loops over some server/database/table names and needs to copy a specific table out to them all.

It works ok on the local server, but when I want to go across to another server, trouble starts.

I have tried various approaches.

1) Linked server followed by "Insert into remotesrv.remotedb.dbo.table..."
result: can't run ALTER queries on the remote server. SELECT statements work fine though.

2) Replication/Subscription
result: Works in general, but it only synchronizes like tables. Can't alter the structure of a table on the remote side.

3) DTS
result: Works fine, but not generic enough (variable tablenames needed).

What do you guys use in these situations?

Ok, no replies :)

For future reference I chose the following:

In fact, option 2) Replication/Subscription is open to alterations of table structure. (I just needed to refresh my snapshot file in the test.)

The copying of tables is therefore done via replication, triggered by a stored procedure.
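For completeness, the linked-server variant that works for plain data movement looks like this (four-part names; all identifiers are placeholders) - the limitation hit in option 1 is that DDL such as ALTER cannot be run this way, only DML:

-- Copy rows into an existing table on the linked server
INSERT INTO RemoteSrv.RemoteDb.dbo.TargetTable (col1, col2)
SELECT col1, col2
FROM dbo.SourceTable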

Best practice using EM and Windows Authentification

Hi,
We have a team of IT people with mixed profiles.
If we use Windows authentication we should be able to see all our
SQL Servers but with limited rights (read-only).
If we use a special admin Windows account we should be able to have
full rights.
How can you accomplish this?
Always logged in with my own user account, using EM and Query
Analyzer for read-only stuff, and using EM and Query Analyzer from time to
time to change something, BUT without logging off on your local machine
and logging on with your admin account.
Creating different MMCs, runas, ...?
Can anybody point me in the right direction?
Thanks!!
Fred"Freddy" <fromheretoeternity@.hotmail.com> wrote in message
news:b9e50d08.0411230826.576b6cf7@.posting.google.com...

> We have a team of IT people with mixed profiles.
> If we use windows authentification we should be able to see all our
> SQL Servers but with limited rights (readonly).
> If we use a special admin windows account we should be able to have
> full rights.
> How can you accomplish this?
Define a new group that includes all of the accounts that should have full
rights, add that group as a login on SQL Server, and grant system administrator
permissions to that group.

> Allways logged in with my own user account and using EM and Query
> Analyser for readonly stuff and use EM and Query Analyser from time to
> time to change something BUT without logging off on your local machine
> and logging on with your admin account.
> Creating different MMC's, runas,...?
> Can anybody point me in the right direction?
Create shortcuts to the EM and QA MMCs on the desktops. Train your
admins: when performing administrative activity, shift/right-click the
appropriate MMC, choose Run As..., and enter the administrative credentials
to get full rights.
Steve
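A sketch of Steve's first suggestion in SQL 2000 terms (the group name is a placeholder):

-- Grant the admin Windows group full rights on the instance
EXEC sp_grantlogin 'DOMAIN\SQLAdmins'
EXEC sp_addsrvrolemember 'DOMAIN\SQLAdmins', 'sysadmin'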

Best Practice to update SQL Server Database Tables & Procedures

We are not going to have remote access to the SQL Server 2005... we as
developers can build and test in our place.
After successfully testing the code... web pages and new SQL Server database
tables and stored procedures have to be carried physically on a USB flash
disk, and we are required to go to the host company's server location and update
the web pages and SQL Server 2005 database tables.
What is the best practice... if the situation is that we have to carry the
table and its data physically to the host location, log in to the server,
connect the USB flash drive, and update tables in SQL Server?
What is the best practice to perform an update by physically going to the host
company to make SQL Server 2005 database changes?
In both cases, the normal approach is to perform new installations and
upgrades using SQL scripts. You can use a tool like SQLCMD to execute the
scripts from a command file. For upgrades, it is important to test against
a production database replica to ensure the database is properly upgraded.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"TalalSaleem" <TalalSaleem@.discussions.microsoft.com> wrote in message
news:373C5072-E4D4-4F7C-A827-266DAA35C4E4@.microsoft.com...
> We will going to have no remote access to the SQL Server 2005â?¦ we as
> developers can build and test in our place.
> After successfully testing the codesâ?¦ web pages and SQL Server Database
> new
> tables and stored procedures has to be carried physically in the USB Flash
> disk and required to go to the host company server location and update the
> web pages and SQL Server 2005 Database tables.
> What is the best practice â?¦ if the situation is that we have to carry the
> table and its data inside table physically to the host location and login
> to
> the server and connect USB flash drive and update tables in SQL Server..
> What is the best practice to perform update by going physical to the host
> company for make SQL Server 2005 Database changes?|||On Tue, 15 Jan 2008 07:04:25 -0600, "Dan Guzman"
<guzmanda@.nospam-online.sbcglobal.net> wrote:
>> What is the best practice ? if the situation is that we have to carry the
>> table and its data inside table physically to the host location and login
>> to
>> the server and connect USB flash drive and update tables in SQL Server..
>> What is the best practice to perform update by going physical to the host
>> company for make SQL Server 2005 Database changes?
>In both cases, the normal approach is to perform new installations and
>upgrades using SQL scripts. You can use a tool like SQLCMD to execute the
>scripts from a command file. For upgrades, it is important to test against
>a production database replica to ensure the database is properly upgraded.
I think he's asking more about data. Say you need to send someone 1gb
of data, to populate a table, to update a database, etc.
I'd say you can use a good old ASCII CSV or flat file, but of course
you need some kind of import logic, typically some staging tables and
an SSIS package, to do the work.
Josh
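For the data side, a flat file carried on the flash drive can be loaded into a staging table directly from T-SQL (hypothetical file and table names; FIRSTROW = 2 skips a header row):

BULK INSERT dbo.Staging_Products
FROM 'E:\Transfer\products.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)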

Best Practice to Simulate Time

Hi,
We have the need to roll the time of the database server forward to perform
some time sensitive testing. The problem is that the database test server
is part of a production Windows 2000 environment. I suppose the optimal
thing to do would be to move the clock on the server forward X number of
hours, and then test our procedures. The problem is that Windows 2000 will
automatically sync up the time because of the Kerberos security. We can't
change the time on all of the servers. Also, when our SQL queries are
retrieving the date/time, it calls GetDate().
Does anybody have any ideas of what we could do to simulate time? I know
one quick and easy answer is to set up a completely separate environment
perhaps with one server. We could put Win2k and SQL Server on that box. It
would be its own domain, so we could play with the time however we want. I
was wondering if there was a better way of doing this.
Thanks in advance,
cj
Pull the server out of the domain and see if it works. You may need to
reconfigure the service account.
Thanks
Ravi
"Curtis Justus" wrote:

> Hi,
> We have the need to roll the time of the database server forward to perform
> some time sensitive testing. The problem is that the database test server
> is part of a production Windows 2000 environment. I suppose the optimal
> thing to do would be to move the clock on the server forward X number of
> hours, and then test our procedures. The problem is that Windows 2000 will
> automatically sync up the time because of the Kerberos security. We can't
> change the time on all of the servers. Also, when our SQL queries are
> retrieving the date/time, it calls GetDate().
> Does anybody have any ideas of what we could do to simulate time? I know
> one quick and easy answer is to set up a completely separate environment
> perhaps with one server. We could put Win2k and SQL Server on that box. It
> would be its own domain, so we coould play with the time however we want. I
> was wondering if there was a better way of doing this.
> Thanks in advance,
> cj
>
>
Firstly and most importantly, why would you even consider doing this kind of
test on a production server?
I generally make it a rule to avoid writing time-sensitive code precisely
because of the obvious testing problems. If you need to reference the
current date and time then parameterize it or make your own class or
function to retrieve the clock information. That way you have a single point
at which you can interpose your own time value for testing purposes.
David Portas
SQL Server MVP
Ravi,
That is what I figured I would have to do. Thanks for the confirmation.
Take care,
cj
"Ravi" <Ravi@.discussions.microsoft.com> wrote in message
news:D4301F76-89AD-4554-9B8F-33EE8C686284@.microsoft.com...[vbcol=seagreen]
> Pull the server out of domain and see if it works. You may need to
> reconfigure the service account
> --
> Thanks
> Ravi
>
> "Curtis Justus" wrote:
David,
To answer your first question: there aren't any other production databases
on this "production" server. The only reason why I called it a production
server is because it is located in an active domain. They are converting
from a Netware network to the Win2K-based system and are slowly
transitioning over. I'm sorry for not sharing that information.
The date thing was something I had asked our database people about. However
it is too late in the game to change everything.
With that said, do you have any other suggestions? Perhaps going through
our stored procs and replacing GetDate with a call to a UDF might do the
trick.
Thanks,
cj
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:WpKdnYLmt7VwlV7fRVn-rQ@.giganews.com...
> Firstly and most importantly, why would you even consider doing this kind
> of test on a production server?
> I generally make it a rule to avoid writing time-sensitive code precisely
> because of the obvious testing problems. If you need to reference the
> current date and time then parameterize it or make your own class or
> function to retrieve the clock information. That way you have a single
> point at which you can interpose your own time value for testing purposes.
> --
> David Portas
> SQL Server MVP
> --
>
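
[A rough sketch of that UDF idea, assuming SQL Server 2000: GETDATE() cannot
be called directly inside a user-defined function on that release, but it can
be read through a view, and a one-row offset table lets a test shift the
result. All object names here are illustrative, not from the thread:]

    CREATE VIEW dbo.v_SystemTime AS
    SELECT GETDATE() AS Now   -- SQL 2000 UDFs may not call GETDATE() directly
    GO

    CREATE TABLE dbo.TimeOffset (OffsetHours INT NOT NULL)
    INSERT INTO dbo.TimeOffset (OffsetHours) VALUES (0)   -- zero in production
    GO

    CREATE FUNCTION dbo.fn_GetDate ()
    RETURNS DATETIME
    AS
    BEGIN
        DECLARE @now DATETIME, @offset INT
        SELECT @now = Now FROM dbo.v_SystemTime
        SELECT @offset = OffsetHours FROM dbo.TimeOffset
        RETURN DATEADD(hour, @offset, @now)
    END
    GO

    -- Procedures call dbo.fn_GetDate() wherever they used GETDATE();
    -- a test rolls time forward by updating the offset:
    UPDATE dbo.TimeOffset SET OffsetHours = 48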
|||"Curtis Justus" <sure@.you.wont.spam.me.org> wrote in
news:uNp4NMSfFHA.2472@.TK2MSFTNGP15.phx.gbl:

> With that said, do you have any other suggestions? Perhaps going
> through our stored procs and replacing GetDate with a call to a UDF
> might do the trick.
Look out for CURRENT_TIMESTAMP as well.
Here are two crazy ideas:
- find or write a utility that traps all calls to the Windows API that get
the current time, returning the simulated time instead. If necessary, have
sqlservr.exe be spawned by the utility (instead of the normal service start).
- set the timezone offset to several hundred hours.
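
[Before replacing GetDate calls, it helps to inventory which objects
reference either function. A rough sketch against the SQL Server 2000
syscomments catalog; note that definitions longer than 4000 characters span
multiple rows, so a keyword split across row boundaries could be missed:]

    -- List every object whose source mentions GETDATE or CURRENT_TIMESTAMP.
    SELECT DISTINCT OBJECT_NAME(id) AS ObjectName
    FROM syscomments
    WHERE text LIKE '%GETDATE%'
       OR text LIKE '%CURRENT_TIMESTAMP%'
    ORDER BY ObjectName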
