Sunday, March 25, 2012
Best Replication
has the database and the contents are replicated throughout. The size of
the database is about 20 GB and it is to be updated on a weekly basis. The
time frame for replication is limited to 2 hours, and all the data must be
updated from the Update DB server within the 2 hours given.
What is the best data replication method for this scenario?
Jason,
transactional would be the fastest. Log shipping might be of alternative
interest. I'm assuming that the subscribers don't need to update the data.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
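For reference, a minimal sketch of what setting up such a transactional publication on the publisher might look like; the database, publication, and article names below (UpdateDB, UpdateDB_Pub, Customers) are placeholders only, and the full set of parameters depends on the environment:

-- Enable the database for publishing (names here are examples only)
EXEC sp_replicationdboption @dbname = 'UpdateDB', @optname = 'publish', @value = 'true';

-- Create a continuously running transactional publication
EXEC sp_addpublication @publication = 'UpdateDB_Pub',
     @repl_freq = 'continuous', @status = 'active', @allow_pull = 'true';

-- Add each table to be replicated as an article
EXEC sp_addarticle @publication = 'UpdateDB_Pub',
     @article = 'Customers', @source_object = 'Customers';

-- The pull subscription itself is then registered at the subscriber with
-- sp_addpullsubscription and sp_addpullsubscription_agent.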
Best Recovery model
The problem is that the SQL Server box only has 512 MB of memory, and the tranlog on this database grows tremendously with each import and when update queries are run against it. This tends to slow things down a bit on our other databases. We are getting a new SQL Server box, but until then, what would be the best recovery model? I currently have it set to Bulk-Logged and allow the tranlog to grow by 10% (with a base of 250 MB). The tranlog grows to 5-10 GB, and in order to shrink it I have to change the recovery model to Simple and then back to Bulk-Logged. (I've tried the dbcc shrinkdatabase, dbcc shrinkfile, dbcc showcontig, and dbcc checkdb commands, as well as BACKUP LOG dbName WITH TRUNCATE_ONLY, and nothing will shrink it unless I change the recovery model to Simple.)
Perhaps you could turn off transaction logging during bulk data loads.|||Thanks for the reply blindman,
I tried that and it slowed down the importing to a crawl (1-2 hours versus 15 minutes). I know I'm pushing the limits on the SQL Server box.
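For what it's worth, the switch-to-Simple-and-shrink routine described above boils down to something like this on SQL 2000 and later (ImportDB and ImportDB_Log are placeholder database and logical log file names, and 250 matches the 250 MB base mentioned above):

-- Switch to Simple so the log can actually be truncated
ALTER DATABASE ImportDB SET RECOVERY SIMPLE;
GO
-- Shrink the log file back towards its 250 MB base size (target is in MB)
DBCC SHRINKFILE (ImportDB_Log, 250);
GO
-- Switch back to Bulk-Logged for the next import
ALTER DATABASE ImportDB SET RECOVERY BULK_LOGGED;
GO
-- Take a full backup afterwards so the backup chain starts from a known point
BACKUP DATABASE ImportDB TO DISK = 'd:\backup\ImportDB.bak';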
Best Practices for Server Sizing for MS Reporting Services
There isn't much guidance available on how to effectively size a server.
Here are my questions:
1) In an environment with 1 reporting server (hosting the reporting
data layer, application, and management layer) connecting to 1 or more
SQL servers (over 2 trunked 1Gbps switch connections), what is the
constraining resource when running large reports against a database of,
say, 4GB in size? CPU, RAM, disk, or network speed on the reporting
server? Or the data-hosting SQL Server? Or does it depend on code?
2) If I have a Dual Core CPU, do I buy 1 CPU license? MS responded
to HyperThreading presenting 2 virtual CPUs with the statement that you
effectively buy CPU licenses based on socket count. This would imply that
if I buy a ProLiant 585 with 4 Dual Core AMD Opterons (8 CPU cores in 4
packages / sockets), I can run MS Reporting Services Standard Edition.
Is that true?
Any input or direction would be appreciated.
Paul V.
You might want to post in microsoft.public.sqlserver.reportingsvcs to
see if you get a better response.
For licensing issues, there is an FAQ, but if it doesn't answer your
question it would probably be best to contact Microsoft directly for a
definite answer:
http://www.microsoft.com/sql/howtobuy/faq.mspx
Simon
Monday, March 19, 2012
Best practice method for shrinking the log file in dev environments
Can someone tell me what the best way to reduce my log file size is when
it gets too big? I can't switch recovery mode to Simple, but every so
often I'd like to go in and clear it out.
What is the preferred command to do this?
I've heard the backup command with TRUNCATE_ONLY isn't the best way
to do this? Is that the case and, if so, what's the alternative?
Also, could someone tell me if doing a full backup automatically
truncates the transaction log?
Many thanks
Simon
Hello,
Could someone tell me if doing a full backup automatically truncates the
transaction log?
No, a FULL database backup will not clear the transaction log. You need to
back up the transaction log using BACKUP LOG to clear it, or, if you do not
want transaction log backups, you can use BACKUP LOG WITH TRUNCATE_ONLY
to clear the transaction log from the LDF file.
If you do not require transaction log backups, then change the recovery
model for the database to "SIMPLE"; in this case the transaction log is
cleared at each checkpoint. This recovery model will not allow transaction
log backups.
On the other hand, if your data is very critical / production data, set
the recovery model to "FULL". This allows you to perform transaction log
backups. In this model the committed transactions remain in the log file
and are only cleared when you perform a log backup or issue TRUNCATE_ONLY.
TRUNCATE_ONLY breaks the log backup chain, so it is not a good option on a
production server.
If it is a production / critical database, follow these steps:
1. Set the database recovery model to "FULL".
2. Perform a full database backup once.
3. Schedule transaction log backups using BACKUP LOG dbname TO
DISK = 'd:\backup\dbname.tr1'.
4. Repeat step 3 every 30 minutes (decide based on the volume of
transactions), but give each backup a new file name: dbname.tr1, .tr2,
.tr3, and so on.
5. After steps 3 and 4 the inactive transactions will be cleared from the
transaction log file.
If you follow these steps, then even if your database crashes you can
recover up to the last transaction log backup, and you can also do a
POINT_IN_TIME recovery if needed.
If it is non-production or the data is not critical:
1. Set the recovery model to "SIMPLE".
2. Perform a full database backup daily.
3. If needed, once in a while you can execute BACKUP LOG dbname WITH
TRUNCATE_ONLY.
With this methodology you can restore only to the last full backup.
Thanks
Hari
"Simon" <simon@.nothanks.com> wrote in message
news:%23$ozNe8RHHA.3440@.TK2MSFTNGP03.phx.gbl...
> Hi all,
> Can someone tell me what the best way to reduce my log file size is when
> it gets too big. I can't switch recovery mode to Simple but every so often
> I'd like to go in and clear it out.
> What is the preffered command to do this?
> I've heard the backup command with the TRUNCATE_ONLY isnt the best way to
> do this? Is that the case and if so, whats the alternative?
> Also, could someone tell me if doing a full backup automatically truncates
> the transaction log?
> Many thanks
> Simon|||That's a great answer - thanks sincerely for your time and advice
Kindest Regards
Simon
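As a rough sketch of the routine Hari describes, followed by a shrink of the physical log file (the database name, paths, and the 500 MB shrink target are placeholders; use sp_helpfile to find the logical log file name):

-- One-off: put the database in FULL recovery and take a base full backup
ALTER DATABASE dbname SET RECOVERY FULL;
BACKUP DATABASE dbname TO DISK = 'd:\backup\dbname.bak';

-- Scheduled (e.g. every 30 minutes): back up the log to a new file each time
BACKUP LOG dbname TO DISK = 'd:\backup\dbname.tr1';

-- Once the log has been backed up, the physical file can be shrunk if it
-- has grown too large (this is the part relevant to dev environments)
EXEC sp_helpfile;                     -- find the logical log file name
DBCC SHRINKFILE (dbname_log, 500);    -- target size in MB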
Best practice for storing long text fields
As we all know, there is an 8060-byte size limit on SQL Server rows. I
have a table which requires a number of text fields (5 or 6). Each of
these text fields should support a max of 4000 characters. We currently
store the data in varchar columns, which worked fine until our
appetite for text fields increased to the current requirement of 5 or 6
fields of 4000 characters each. I am given to review a design which
essentially suggests moving the text columns to a separate TextFields
table. The TextFields table will have two columns - a unique reference
and a VARCHAR(4000) column, thus allowing us to cross-reference with
the original record. My first impression is that I'd rather use the SQL
Server 'text' DB type instead, which would allow me the same
functionality with much less effort and possibly better performance.
Can anyone advise on the advantages and disadvantages of the two options
and what the best practice in this case would be.
Any advice will be much appreciated.
Tzanko
Tzanko wrote:
Quote:
Originally Posted by
As we all know, there is a 8060 bytes size limit on SQL Server rows. I
have a table which requires a number of text fields (5 or 6). Each of
these text fields should support a max of 4000 characters. We currently
store the data in varchar columns, which worked fine untill our
appetite for text fields increased to the current requirement of 5, 6
fields of 4000 characters size. I am given to review a design, which
esentially suggests moving the text columns to a separate TextFields
table. The TextFields table will have two columns - a unique reference
and a VARCHAR (4000) column, thus allowing us to crossreference with
the original record. My first impresion is that I'd rather use the SQL
Server 'text' DB type instead, which would allow me the same
functionality with much less effort and possibly better performance.
Can anyone advise on advantages and disadvantages of the two options
and what the best practice in this case would be.
I hear that VARCHAR(MAX) is the new TEXT, but it's only available
in SQL 2005.|||Tzanko (tzanko.tzanev@.strategicthought.com) writes:
Quote:
Originally Posted by
As we all know, there is a 8060 bytes size limit on SQL Server rows.
Yes, in SQL 2000. Not in SQL 2005. There, a row can span pages.
Quote:
Originally Posted by
I have a table which requires a number of text fields (5 or 6).
Do these text fields hold the same text that spans fields, or are
they different texts?
Quote:
Originally Posted by
I am given to review a design, which esentially suggests moving the text
columns to a separate TextFields table. The TextFields table will have
two columns - a unique reference and a VARCHAR (4000) column, thus
allowing us to crossreference with the original record.
If they are different texts they should be in different columns, or you
should have some type column telling them apart.
Quote:
Originally Posted by
My first impresion is that I'd rather use the SQL Server 'text' DB type
instead, which would allow me the same functionality with much less
effort and possibly better performance.
Yes, if the columns all hold the same text, this might be the way
to go. You can store up to 2GB in a text column.
But better performance? Nah. If nothing else, text is difficult to
work with and there are a lot of limitations. As Ed mentions, SQL 2005
comes with varchar(MAX), which can also hold 2GB but which you can
work with in the same way as a regular varchar.
If the columns are different texts, I see little point in using the
text data type.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Are you saying that in SQL 2000 you can Span VarChar's into multiple columns
automatically? If so how?
Cheers, @.sh
"Erland Sommarskog" <esquel@.sommarskog.sewrote in message
news:Xns983CEED4849AEYazorman@.127.0.0.1...
Quote:
Originally Posted by
Tzanko (tzanko.tzanev@.strategicthought.com) writes:
Quote:
Originally Posted by
>As we all know, there is a 8060 bytes size limit on SQL Server rows.
>
Yes, in SQL 2000. Not in SQL 2005. There a row can span pages.
>
Quote:
Originally Posted by
>I have a table which requires a number of text fields (5 or 6).
>
Do these text fields hold the same text that spans fields, or are
they different texts?
>
Quote:
Originally Posted by
>I am given to review a design, which esentially suggests moving the text
>columns to a separate TextFields table. The TextFields table will have
>two columns - a unique reference and a VARCHAR (4000) column, thus
>allowing us to crossreference with the original record.
>
If they are different texts they should be in different columns, or you
should have some type column telling them apatt.
>
Quote:
Originally Posted by
>My first impresion is that I'd rather use the SQL Server 'text' DB type
>instead, which would allow me the same functionality with much less
>effort and possibly better performance.
>
Yes, if they the column are all the same text, this might be the way
to go. You can store up to 2GB in a text column.
>
But better performance? Nah. If nothing else, text is difficult to
work with and there are lot of limitations. As Ed mention, SQL 2005
comes with varchar(MAX) which also can fit 2GB, but which you can
work with in the same way as a regular varchar.
>
If the columns are different texts, I see little point to use the
text data type.
>
>
>
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
>
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||@.sh (spam@.spam.com) writes:
Quote:
Originally Posted by
Are you saying that in SQL 2000 you can Span VarChar's into multiple
columns automatically? If so how?
No. What I said is that on SQL 2005 a row can span pages, so that you can
have more than 8060 bytes per row.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Cool!
"Erland Sommarskog" <esquel@.sommarskog.sewrote in message
news:Xns983D9BE59C16AYazorman@.127.0.0.1...
Quote:
Originally Posted by
@.sh (spam@.spam.com) writes:
Quote:
Originally Posted by
>Are you saying that in SQL 2000 you can Span VarChar's into multiple
>columns automatically? If so how?
>
No. What I said is that on SQL 2005 a row can span pages, so that you can
have more than 8060 bytes per row.
>
>
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
>
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Many thanks for your replies.
Just to clarify the issue:
The requirement is to create a table that has, say, 6 columns which store
strings (such as Description, Notes, etc.). Each of these 6 columns
should store a character string with a max length of 4000 characters. The
problem is that SQL Server 2000 will not work if I simply define the
columns as varchar(4000), as at some point the row size reaches the
8060-byte row limit and this generates an error. There is an 8060-byte limit
on SQL Server 2000 rows. Note that I am not trying to store the same
string into 6 different columns spanning from column to column. I have
a separate string to store in each column.
The question:
What is the best way to implement this in SQL Server 2000? In
particular I am looking at two options. Option one is setting each of the
6 columns to be of type 'text'. Looking at the documentation, it appears
that this would work as long as each string is no longer than 4000
characters, and I am happy to have this limit. It is, however, unpleasant
to use the text type for strings longer than 4000 characters, as in that
case I understand there are some specific ways of handling the data. Option
two is to create a new LongStrings table with 2 columns - a long unique
number and a varchar(4000). Each string is stored in this LongStrings
table and is cross-referenced (by using the unique ID) with its original
cell in its original table. I'd prefer option 1 (provided I do not
have to do anything special to handle the strings) and would like to
avoid option 2 because it is not easy to write queries to get the data.
The second question is what the situation is with SQL Server 2005. I
understand I can simply define the columns as varchar(max) and do not
have to do anything special. Has someone used this successfully, and can
you confirm this is the case?
Thanks for your help.
Tzanko
@.sh wrote:
Quote:
Originally Posted by
Cool!
>
>
"Erland Sommarskog" <esquel@.sommarskog.sewrote in message
news:Xns983D9BE59C16AYazorman@.127.0.0.1...
Quote:
Originally Posted by
@.sh (spam@.spam.com) writes:
Quote:
Originally Posted by
Are you saying that in SQL 2000 you can Span VarChar's into multiple
columns automatically? If so how?
No. What I said is that on SQL 2005 a row can span pages, so that you can
have more than 8060 bytes per row.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
Quote:
Originally Posted by
The question:
What is the best way to implement this in SQL Server 2000. In
particular I am looking at two options: Setting each of the 6 columns
to be of type 'text'. Looking at the documentation, it appears that
this would behave for as long as each string is not longer than 4000
characters and I am happy to have this limit. It however is unpleasant
to use the text type for longer than 4000 char strings, as in this case
I understand there are some specific ways of handling the data. Option
two is to create a new LongStrings table with 2 columns - long unique
number and varchar(4000). Each string is stored in this LongStrings
table and is crosreferenced (by using the unique ID) with its original
cell in its original table. Now I'd preffer option 1 (provided I do not
have to do anything special to handle the strings) and would like to
avoid option 2 because it is not easy to write queries to get the data.
The best option, in my opinion, is to create two or three new tables, rename
the existing table, and then create a view that unifies them all. Then in
SQL 2005 you can scrap the view and move the columns back to the mother
table. Very little code would actually be affected.
If the key of the table is (cola, colb), the new tables should also have
the key (cola, colb). Simply put, what you do is split the columns
over several tables.
You should consider text or varchar(max) if you really need to fit more
than 8000 characters.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
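To make the split-and-view suggestion concrete, here is a rough sketch under assumed names (a mother table Docs keyed on DocID, the wide varchar(4000) columns spread over side tables so each row stays under the 8060-byte limit, and a view reuniting them); the actual table and column names would of course come from the real schema:

-- Mother table keeps the narrow columns
CREATE TABLE Docs (
    DocID int          NOT NULL PRIMARY KEY,
    Title varchar(200) NOT NULL
);
GO
-- The wide varchar(4000) columns are split so each table stays under the
-- 8060-byte row limit (two 4000-byte columns plus the key fit comfortably)
CREATE TABLE DocsText1 (
    DocID       int           NOT NULL PRIMARY KEY REFERENCES Docs (DocID),
    Description varchar(4000) NULL,
    Notes       varchar(4000) NULL
);
GO
CREATE TABLE DocsText2 (
    DocID    int           NOT NULL PRIMARY KEY REFERENCES Docs (DocID),
    Comments varchar(4000) NULL,
    History  varchar(4000) NULL
);
GO
-- View that presents the columns as if they still lived in one table
CREATE VIEW DocsAll
AS
SELECT d.DocID, d.Title,
       t1.Description, t1.Notes,
       t2.Comments, t2.History
FROM Docs d
JOIN DocsText1 t1 ON t1.DocID = d.DocID
JOIN DocsText2 t2 ON t2.DocID = d.DocID;

Existing code keeps selecting from DocsAll, and once the database is on SQL 2005 the view can be dropped and the columns moved back to the mother table, as varchar(max) if more than 8000 characters are ever needed.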
Sunday, March 11, 2012
Best practice for Data size/Log Size ratio
Just wanted to know what is a general rule of thumb when determining log file space against a database's data file.
We allow the data file for our database to grow 10%, unlimited. We do not allow our log file to autogrow due to a specific and poorly written process (which we are in a three-month process of removing) that can balloon the log file size.
Should it be 10% of the data file, i.e. if the data file size is 800MB the log file should be 80MB?
I realize there are a myriad of factors that go into file size, but a general starting point would be nice.
Thanks
Jeff
--
Message posted via http://www.sqlmonster.com
"Jeffrey Sheldon via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:5a057600839f49958dcc19b7f175f702@.SQLMonster.c om...
> Just wanted to know what is a general rule of thumb when determining log
> file space against a database's data file.
> We allow our data file for our database to grow 10%, unlimited. We do not
> allow our log file to autogrow due to a specific and poorly written
> process (which we are in a three month process of remove) that can balloon
> the log file size.
> Should it be 10% of the Data file, i.e. if the Date file size is 800MB the
> log file should be 8MB?
> I realize there are a myraid of factors that go against file size but a
> general starting point would be nice.
> Thanks
> Jeff
> --
> Message posted via http://www.sqlmonster.com
I don't believe there's any MS recommendation on this, although I'm happy to
be corrected. I think I read a post from an MVP saying he uses the size of
the largest table plus 10%; personally, with no other information to base a
decision on, I would go for 20% of the size of the data file(s). But as you
say, there are a myriad of factors, so you may as well just pick a number
(do you have another similar database to compare against, perhaps?), and
then adjust it as you go along. Beware of auto-grow/shrink, though, which
can be a bit of a pain if it kicks in at the wrong time - it's probably
better to go with a fixed size and too large rather than too small,
especially since disk space is relatively cheap.
Simon|||Thanks Simon,
I will use our largest database. It is funny that this is really a trial and error process (I remember some formula for calculating the sizes, but even that is not an exact science).
Jeff
--
Message posted via http://www.sqlmonster.com|||"Jeffrey Sheldon via SQLMonster.com" <forum@.SQLMonster.com> wrote in
news:83b1d8a99b754bd39177bda2d37d6b64@.SQLMonster.c om:
> Thanks Simon,
> I will use our largest database. It is funny that this is really
> a trail and error process (I remember some formula of calculating
> the sizes but even that is not an exact science)
The log is akin to a journal: it captures all the changes to your
database. To size it, make it large enough to hold all the expected
changes to the database between log dumps. I'd suggest you add a
buffer to the size for the 'unexpected' changes.
Also, if you have a large log, it's not going to impact performance.
You may waste some space but disk space is relatively cheap.
On a different note, when you make a change to a table, the table
isn't really changed, the changes go to the log. Only at
'checkpoint' are the changes in memory synchronized with the table.
But you probably already knew that ...
--
Pablo Sanchez - Blueoak Database Engineering, Inc
http://www.blueoakdb.com|||I agree w/ Pablo that having a large log won't impact performance if you
have space on your server to accommodate this. You're going to take hits
on performance every time your log has to grow, so if your db isn't a
high-transaction db and there are few changes throughout the day, a
smaller log would be fine, e.g. 10% or something of that nature. DBs w/
high volumes of transactions throughout the day will fill up the log
faster, making it autogrow a lot and thus hurting performance a little. I
feel everyone's DBs are different in that aspect, so you just need to
monitor it for a bit and see what works best for you.
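As a practical aid to the "monitor it for a bit" advice, log usage can be checked periodically and the observed high-water mark used to settle on a fixed log size; the database name and sizes below are example values only:

-- Show current size and percentage used of every database's log
DBCC SQLPERF (LOGSPACE);

-- Once a sensible figure is known, size the files explicitly rather than
-- relying on percentage autogrow (numbers here are placeholders)
CREATE DATABASE Sales
ON PRIMARY ( NAME = Sales_Data, FILENAME = 'd:\data\Sales.mdf', SIZE = 800MB )
LOG ON ( NAME = Sales_Log, FILENAME = 'e:\log\Sales_log.ldf', SIZE = 160MB, FILEGROWTH = 50MB );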
Thursday, March 8, 2012
Best Practice & Other
I have the following replication setup:
Replication Type: Transactional
Database Size: Circa 35gb
Articles: All articles are published and are required to be at the
subscriber.
Server1: Publisher, SQL 2000 Sp3 (W2k3 sp1)
Server2: Distributor, SQL 2000 Sp4 (W2k3 R2 sp1)
Server3: Subscriber SQL 2000 Sp3 (W2k3 sp1)
Server 3 has the pull subscription.
Other Info: Server 2 did have SQL 2005 installed. It has since been
uninstalled, which resolved some transactional issues.
This is the environment that I have inherited.
Based on reading around, this appears to be an acceptable best
practice method. I might press against throwing SP4 on server 1 and
server 3.
Are there any amazing troubleshooting tips for this process around?
There are times when the transactional replication does not work. I
think it times out, so I'm looking at changing the timeout to 3000
seconds.
One issue I've found is that there was a duplicated transaction that
managed to get through. This crashed the environment and there was no
choice but to create a new snapshot - it takes hours. How could we
best avoid this? If we deleted these transactions from the subscriber
(difficult due to referential integrity) how would the system know to
replicate them again?
Another one that we've had is where the old transactions are held in
the log. I need to flush them out so that the log can be truncated and
then reduced in size. Any further hints and tips?
I'm simply after some good practice methods to help troubleshoot and
plan the replication process since we're investing time and resource
in it rather heavily. We'll move to 2005 once things are stable.
Thanks in advance for the help.
Simon
Hi Paul
Thanks for the advice. Can you think of any really good
troubleshooting methods for when things go wrong? I don't mind how
generic they are; it's just good to exercise the brain on new ways of
handling problems.
Cheers
Simon
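A few commands that can help with the two problems mentioned above (the log that will not truncate because of pending replicated transactions, and general agent health). Note that sp_repldone with the parameters shown marks every pending transaction as already distributed, so it should only be used when the subscription is going to be reinitialized with a new snapshot anyway; the database and file names are placeholders:

-- Check whether a non-distributed replicated transaction is pinning the log
DBCC OPENTRAN ('PublishedDB');

-- Replication throughput counters on the publisher
EXEC sp_replcounters;

-- Last resort: mark all pending transactions as already distributed so the
-- log can be truncated. This invalidates existing subscriptions.
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL,
     @numtrans = 0, @time = 0, @reset = 1;

-- Afterwards the log can be truncated and shrunk as usual
BACKUP LOG PublishedDB WITH TRUNCATE_ONLY;
DBCC SHRINKFILE (PublishedDB_log, 500);   -- target size in MB, placeholder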
Wednesday, March 7, 2012
Best Performance Stripe setting RAID
The database is 90% used for read actions. Only at night is there a complete refill of data, and write actions are only for statistics. Any advice is welcome on this subject.
Since SQL Server pages are 64K, the RAID stripe setting must also be set to 64K|||SQL 6.5 uses 2k pages and SQL 2000 uses 8k pages.
The stripe size of your RAID drives does NOT have to follow the page size; however, I would use a RAID stripe size >= my page size.