Showing posts with label log. Show all posts

Tuesday, March 27, 2012

Best schema for replicating a DB without replicating data

Hi,
We have log shipping from LA to New York for our mission-critical databases.
However, the log shipping breaks all the time because some of these databases
or their transaction logs are huge.
After re-evaluation we found that for some DBs we don't really need the data
replicated, but we do need to keep the schema and objects up to date:
everything except the huge volume of data. It does not need to be real time;
we could do this once a day.
Could someone suggest some better options for doing this? Ideally something
simple, without a lot of maintenance issues and potential breaking points.
Thanks
Hui
|||Seems you simply want to generate schema at regular intervals?
http://www.karaszi.com/SQLServer/info_generate_script.asp
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Christine C" <ChristineC@.discussions.microsoft.com> wrote in message
news:87C67A1A-52DF-4069-AD9B-A8637ADD3B35@.microsoft.com...
> Hi,
> We had log shipping from LA to New York for our mission critical databases.
> However, the log shipping breaks all the time because some of these databases
> or tran logs are huge.
> Afer re-evaluation we found that some DBs we don't really need the data
> replicated but we do need to keep the schema and objects up to date:
> everything except the huge volume of data. It does not need to be real time,
> we could do this once a day.
> Could someone suggests us what are the better options of doing this? Some
> way that is simple without a lot of maintenance issues and potential breaking
> points.
> Thanks
> Hui
>
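As a rough illustration of the scripting approach Tibor suggests: on SQL Server 2000, the definitions of the code objects (procedures, views, UDFs) can be pulled straight out of syscomments and replayed on the standby server. This is only a sketch covering code objects; tables, indexes, and permissions still need a proper scripting tool.

-- Sketch only: extract the CREATE text of procs, views and UDFs so a nightly
-- job can write it to a file and replay it on the DR server.
-- xtype: P = procedure, V = view, FN/TF/IF = user-defined functions.
SELECT c.text
FROM sysobjects o
JOIN syscomments c ON c.id = o.id
WHERE o.xtype IN ('P', 'V', 'FN', 'TF', 'IF')
ORDER BY o.name, c.colid   -- colid keeps multi-row definitions in order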
|||check out DB Ghost - http://www.dbghost.com
"Christine C" wrote:
> Hi,
> We had log shipping from LA to New York for our mission critical databases.
> However, the log shipping breaks all the time because some of these databases
> or tran logs are huge.
> Afer re-evaluation we found that some DBs we don't really need the data
> replicated but we do need to keep the schema and objects up to date:
> everything except the huge volume of data. It does not need to be real time,
> we could do this once a day.
> Could someone suggests us what are the better options of doing this? Some
> way that is simple without a lot of maintenance issues and potential breaking
> points.
> Thanks
> Hui
>
|||Sorry, I did not describe the full requirement:
1. We need to copy all changes in the databases (including sysusers, stored
procedures, views, UDFs, etc.) except the data. For sysusers we do need the
data, because we need all the users replicated.
2. One challenge we have is that there are objects (views, stored procs)
that span databases, so whatever tool we use needs to handle those
dependencies without erroring out.
3. One of our developers is trying the SQL-DMO APIs to do the
copy-all-tables. Here are the uncertainties for us:
a. He is not sure whether SQL-DMO also lets him copy the users, views, and
stored procedures separately.
b. The entire extract for replicating three databases (without data) took
50 minutes, which means the destination databases are going to be unusable
for a long time.
Common questions for all the tools in your link:
c. What we need is for the tool to take care of the delta between source and
destination. For objects that no longer exist in the source, how do we remove
them from the destination if the scripts are driven from the source?
d. Are the tools suggested in your link mostly scripting tools for DDL only
(we don't care about the data), or can some of them transfer users as well?
e. Do they require manually adding or changing the script definition every
time a DB changes (schema or stored proc)? Do those tools also know to
remove all objects before proceeding?
I know the list of questions is long, but these are critical processes we
want to set up in production. The problems we've encountered might not be
standard, and having huge databases that need to replicate to New York for
DR complicates the issue too.
Thank you for your patience in replying.
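On question (c), one low-tech possibility is to compare the system catalogs across a linked server and drop whatever has disappeared from the source. A minimal sketch, assuming a linked server named LASRV pointing at the source; the linked server and database names are hypothetical:

-- Run on the destination: list objects that no longer exist in the source,
-- as candidates for generated DROP statements.
SELECT d.name, d.xtype
FROM sysobjects d
WHERE d.xtype IN ('U', 'V', 'P', 'FN', 'TF', 'IF')
  AND OBJECTPROPERTY(d.id, 'IsMSShipped') = 0   -- skip system-shipped objects
  AND NOT EXISTS (SELECT 1
                  FROM LASRV.SourceDB.dbo.sysobjects s
                  WHERE s.name = d.name
                    AND s.xtype = d.xtype)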
"Tibor Karaszi" wrote:
> Seems you simply want to generate schema at regular intervals?
> http://www.karaszi.com/SQLServer/info_generate_script.asp
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> http://www.sqlug.se/
>
> "Christine C" <ChristineC@.discussions.microsoft.com> wrote in message
> news:87C67A1A-52DF-4069-AD9B-A8637ADD3B35@.microsoft.com...
> > Hi,
> >
> > We had log shipping from LA to New York for our mission critical databases.
> > However, the log shipping breaks all the time because some of these databases
> > or tran logs are huge.
> >
> > Afer re-evaluation we found that some DBs we don't really need the data
> > replicated but we do need to keep the schema and objects up to date:
> > everything except the huge volume of data. It does not need to be real time,
> > we could do this once a day.
> > Could someone suggests us what are the better options of doing this? Some
> > way that is simple without a lot of maintenance issues and potential breaking
> > points.
> >
> > Thanks
> >
> > Hui
> >
> >
> >
> >
>
>sql

Best schema for replicate a DB without replicate Data

Hi,
We had log shipping from LA to New York for our mission critical databases.
However, the log shipping breaks all the time because some of these databases
or tran logs are huge.
Afer re-evaluation we found that some DBs we don't really need the data
replicated but we do need to keep the schema and objects up to date:
everything except the huge volume of data. It does not need to be real time,
we could do this once a day.
Could someone suggests us what are the better options of doing this? Some
way that is simple without a lot of maintenance issues and potential breaking
points.
Thanks
Hui
Seems you simply want to generate schema at regular intervals?
http://www.karaszi.com/SQLServer/inf...ate_script.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Christine C" <ChristineC@.discussions.microsoft.com> wrote in message
news:87C67A1A-52DF-4069-AD9B-A8637ADD3B35@.microsoft.com...
> Hi,
> We had log shipping from LA to New York for our mission critical databases.
> However, the log shipping breaks all the time because some of these databases
> or tran logs are huge.
> Afer re-evaluation we found that some DBs we don't really need the data
> replicated but we do need to keep the schema and objects up to date:
> everything except the huge volume of data. It does not need to be real time,
> we could do this once a day.
> Could someone suggests us what are the better options of doing this? Some
> way that is simple without a lot of maintenance issues and potential breaking
> points.
> Thanks
> Hui
>
>
|||check out DB Ghost - http://www.dbghost.com
"Christine C" wrote:

> Hi,
> We had log shipping from LA to New York for our mission critical databases.
> However, the log shipping breaks all the time because some of these databases
> or tran logs are huge.
> Afer re-evaluation we found that some DBs we don't really need the data
> replicated but we do need to keep the schema and objects up to date:
> everything except the huge volume of data. It does not need to be real time,
> we could do this once a day.
> Could someone suggests us what are the better options of doing this? Some
> way that is simple without a lot of maintenance issues and potential breaking
> points.
> Thanks
> Hui
>
>
|||Sorry, I did not describe the full requirement:
1. We need to copy all changes in the databases (including sysusers, stored
procedures, views, udf, ...etc) except the data. Of course sysusers we do
need the data because we need all the users replicated.
2. One challenges we have is that there are objects (views, stored proc)
that are cross databases. Therefore whatever tools we used need to be able to
not error out because of the dependencies.
3. One of our developers are trying the SQL-DMO APIs to do the
copyalltables. Here are the uncertains for us:
a. He is not sure if SQL-DMO allows him to copy the users, views, and
stored procedures separately as well.
b. Tthe entire extract for replicating three databases (without data) took
50 minutes. Which means the entire time the destination databases are going
to be unusable for a long time.
Common question to all tools in your link:
c. what we need is for the tool to take care of the delta of the source and
destination. The objects that no longer exist in the source, how do we remove
them in the destination if the scripts are driving from the source?
d. The tools suggested in your link, are they mostly scripting tools for
DDLs only (we don't care the data) or some of them can transfer users as
well?
e. Do they require manually adding or changing the script defination
everytime a DB has changed (schema or stored proc)? Can those tools also know
to remove all objects before it proceeds?
I know the list of questions are long, but these are critical processes we
want to setup in production. The problems that we encountered might not be
standards and having huge databases need to replicate to New York for DR
complicates the issue too.
Thank you for your patience in replying.
"Tibor Karaszi" wrote:

> Seems you simply want to generate schema at regular intervals?
> http://www.karaszi.com/SQLServer/inf...ate_script.asp
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> http://www.sqlug.se/
>
> "Christine C" <ChristineC@.discussions.microsoft.com> wrote in message
> news:87C67A1A-52DF-4069-AD9B-A8637ADD3B35@.microsoft.com...
>
>

Best schema for replicate a DB without replicate Data

Hi,
We had log shipping from LA to New York for our mission critical databases.
However, the log shipping breaks all the time because some of these database
s
or tran logs are huge.
Afer re-evaluation we found that some DBs we don't really need the data
replicated but we do need to keep the schema and objects up to date:
everything except the huge volume of data. It does not need to be real time,
we could do this once a day.
Could someone suggests us what are the better options of doing this? Some
way that is simple without a lot of maintenance issues and potential breakin
g
points.
Thanks
HuiSeems you simply want to generate schema at regular intervals?
http://www.karaszi.com/SQLServer/in...rate_script.asp
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Christine C" <ChristineC@.discussions.microsoft.com> wrote in message
news:87C67A1A-52DF-4069-AD9B-A8637ADD3B35@.microsoft.com...
> Hi,
> We had log shipping from LA to New York for our mission critical databases
.
> However, the log shipping breaks all the time because some of these databa
ses
> or tran logs are huge.
> Afer re-evaluation we found that some DBs we don't really need the data
> replicated but we do need to keep the schema and objects up to date:
> everything except the huge volume of data. It does not need to be real tim
e,
> we could do this once a day.
> Could someone suggests us what are the better options of doing this? Some
> way that is simple without a lot of maintenance issues and potential break
ing
> points.
> Thanks
> Hui
>
>|||check out DB Ghost - http://www.dbghost.com
"Christine C" wrote:

> Hi,
> We had log shipping from LA to New York for our mission critical databases
.
> However, the log shipping breaks all the time because some of these databa
ses
> or tran logs are huge.
> Afer re-evaluation we found that some DBs we don't really need the data
> replicated but we do need to keep the schema and objects up to date:
> everything except the huge volume of data. It does not need to be real tim
e,
> we could do this once a day.
> Could someone suggests us what are the better options of doing this? Some
> way that is simple without a lot of maintenance issues and potential break
ing
> points.
> Thanks
> Hui
>
>|||Sorry, I did not describe the full requirement:
1. We need to copy all changes in the databases (including sysusers, stored
procedures, views, udf, ...etc) except the data. Of course sysusers we do
need the data because we need all the users replicated.
2. One challenges we have is that there are objects (views, stored proc)
that are cross databases. Therefore whatever tools we used need to be able t
o
not error out because of the dependencies.
3. One of our developers are trying the SQL-DMO APIs to do the
copyalltables. Here are the uncertains for us:
a. He is not sure if SQL-DMO allows him to copy the users, views, and
stored procedures separately as well.
b. Tthe entire extract for replicating three databases (without data) took
50 minutes. Which means the entire time the destination databases are going
to be unusable for a long time.
Common question to all tools in your link:
c. what we need is for the tool to take care of the delta of the source and
destination. The objects that no longer exist in the source, how do we remov
e
them in the destination if the scripts are driving from the source?
d. The tools suggested in your link, are they mostly scripting tools for
DDLs only (we don't care the data) or some of them can transfer users as
well?
e. Do they require manually adding or changing the script defination
everytime a DB has changed (schema or stored proc)? Can those tools also kno
w
to remove all objects before it proceeds?
I know the list of questions are long, but these are critical processes we
want to setup in production. The problems that we encountered might not be
standards and having huge databases need to replicate to New York for DR
complicates the issue too.
Thank you for your patience in replying.
"Tibor Karaszi" wrote:

> Seems you simply want to generate schema at regular intervals?
> http://www.karaszi.com/SQLServer/in...rate_script.asp
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> http://www.sqlug.se/
>
> "Christine C" <ChristineC@.discussions.microsoft.com> wrote in message
> news:87C67A1A-52DF-4069-AD9B-A8637ADD3B35@.microsoft.com...
>
>

Sunday, March 25, 2012

Best replication strategy suggestion?

We are in the process of determining a SQL DB replication strategy. We're
looking at merge replication or log shipping but are in need of more
informed suggestions. It seems like merge replication would be ideal, except
I've read that in the event one of the SQL Servers is unavailable, the other
stops accepting data until some elaborate reconfiguration is made. It also
appears that merge replication requires modifying the app tables - not sure
if the vendor will support that. Log shipping seems to allow for an easier
reconfiguration of the secondary in the event of a publisher failure, but
it's still not very automatic. I've listed the SLA and specifications. Any
ideas or experiences would be most appreciated.
1.) We have two locations and must have a SQL Server in each location.
2.) There will be a steady stream of inserts only (no updates or deletes) to
one or both SQL Servers.
3.) The same data must always reside on both servers (or no more than a
minute off).
4.) The application communicates with a virtual DNS name that will point to
one or both SQL Servers via round robin or a load balancer, depending upon
the replication strategy.
5.) We would like to load balance across both SQL Servers and have them
frequently update each other's DB (maybe merge repl?), but if one of the SQL
Servers goes down the other must continue functioning with little or no
manual reconfiguration (maybe not merge - maybe log shipping and just one
getting the inserts?).
Thanks,
Jim
Load balancing with SQL Server is extremely hard to do unless it is totally
read only. You are probably better off just having the inserts all go to
one DB and using something like Log shipping or Clustering (if the locations
are close enough) for redundancy. SQL 2005 offers another option called
Database Mirroring that may work in your situation as well.
Andrew J. Kelly SQL MVP
"Jims" <biz@.neocasa.net> wrote in message
news:%23vRv7unfFHA.3692@.TK2MSFTNGP09.phx.gbl...
> We are in the process of determining a sql db replication strategy. We're
> looking at merge replication or log shipping but are in need of more
> informed suggestions. It seems like merge replication would be ideal
> except i've read that in the event one of the sql servers is unavailable
> the other stops accepting data until some elaborate reconfiguration is
> made. It also appears that merge replication requires modifying the app
> tables - not sure if the vendor will support. Log shipping seems to allow
> for an easier reconfiguration of the secondary in the event of a publisher
> failure but it's still not very automatic. I've listed the SLA and
> specifications. Any ideas or experiences would be most appreciated.
> 1.) We have two locations and must have a sql server in each location.
> 2.) There will be a steady stream of inserts only (no updates or deletes)
> to one or both sql servers.
> 3.) The same data must always reside on both servers (or no more than a
> minute off)
> 4.) The application communicates to a virtual dns name that will point to
> one or both sql servers via round robin or a load balancer depending upon
> the replication strategy.
> 5.) We would like to load balance across both sql servers and have them
> frequently update each others db (maybe merge repl?) but if one of the sql
> servers goes down the other must continue functioning with little or know
> manual reconfiguration (maybe not merge - maybe log shipping and just one
> getting the inserts?).
> Thanks,
> Jim
>
|||Hi Jim
Only having inserts probably simplifies things a lot (depending on the db
design).
Another option you might consider, assuming you've got your key generation
strategy worked out well, might be to split your tables into partitioned
views & set up two push transactional publications (one each way) to handle
the replication. The benefit from this implemantion would be that either
node could go down without breaking the system. If you ran them both in
continuous mode, you'd probably be able to meet your < 1 minute latency
requirement without much difficulty as well..
Cheers,
Greg Linwood
SQL Server MVP
"Jims" <biz@.neocasa.net> wrote in message
news:%23vRv7unfFHA.3692@.TK2MSFTNGP09.phx.gbl...
> We are in the process of determining a sql db replication strategy. We're
> looking at merge replication or log shipping but are in need of more
> informed suggestions. It seems like merge replication would be ideal
> except i've read that in the event one of the sql servers is unavailable
> the other stops accepting data until some elaborate reconfiguration is
> made. It also appears that merge replication requires modifying the app
> tables - not sure if the vendor will support. Log shipping seems to allow
> for an easier reconfiguration of the secondary in the event of a publisher
> failure but it's still not very automatic. I've listed the SLA and
> specifications. Any ideas or experiences would be most appreciated.
> 1.) We have two locations and must have a sql server in each location.
> 2.) There will be a steady stream of inserts only (no updates or deletes)
> to one or both sql servers.
> 3.) The same data must always reside on both servers (or no more than a
> minute off)
> 4.) The application communicates to a virtual dns name that will point to
> one or both sql servers via round robin or a load balancer depending upon
> the replication strategy.
> 5.) We would like to load balance across both sql servers and have them
> frequently update each others db (maybe merge repl?) but if one of the sql
> servers goes down the other must continue functioning with little or know
> manual reconfiguration (maybe not merge - maybe log shipping and just one
> getting the inserts?).
> Thanks,
> Jim
>
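A bare-bones sketch of the layout Greg describes, with made-up table and column names: each node owns one member table (enforced by a CHECK constraint on the node column) and publishes it one-way to the other node, while the application reads through a UNION ALL view.

-- Node-owned member tables; the CHECK constraints let SQL Server treat the
-- view below as a partitioned view.
CREATE TABLE OrdersNode1 (
    NodeID  tinyint NOT NULL CHECK (NodeID = 1),
    OrderID int     NOT NULL,
    Amount  money   NOT NULL,
    PRIMARY KEY (NodeID, OrderID)
)
CREATE TABLE OrdersNode2 (
    NodeID  tinyint NOT NULL CHECK (NodeID = 2),
    OrderID int     NOT NULL,
    Amount  money   NOT NULL,
    PRIMARY KEY (NodeID, OrderID)
)
GO
-- Each node inserts only into its own member table, which is published
-- (transactional, running continuously) to the other node.
CREATE VIEW Orders AS
SELECT NodeID, OrderID, Amount FROM OrdersNode1
UNION ALL
SELECT NodeID, OrderID, Amount FROM OrdersNode2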


Thursday, March 22, 2012

Best Practices for Reducing Transaction Log Sizes?

Hi All,
I have a bunch of SQL Servers that have a high volume of bulk inserts
happening every day (millions of records per insert).
The most heavily used server has DBs in Simple recovery mode. However,
for both the database and log files, which are set to autogrow 10%, I am
seeing the following characteristics:
database files have about 10-15% free space every night;
log files, however, are about 70-90% free for all DBs.
Now this is a problem, since the log file size is 10GB and it's using
only 2-3GB of it.
I tried using DBCC SHRINKFILE with TRUNCATEONLY; however, it not only freed
the space but also reduced the log file size to 0, and one of my
developers complained of a table getting truncated during a stored proc
execution at that time.
1. Is DBCC SHRINKFILE a good way to truncate free space from logs? How
could it have truncated the log to a size of 0?
2. What would be a good strategy to free up unused space from DB and
log files every day?
Thanks all!
Hi
Good stuff to start
http://www.karaszi.com/sqlserver/info_dont_shrink.asp
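On the two numbered questions, a hedged sketch of the usual sequence on SQL 2000: check how much of each log is really in use, then shrink to an explicit target rather than as far as possible (the database and logical file names here are hypothetical):

-- Report log size and percent-in-use for every database on the server.
DBCC SQLPERF(LOGSPACE)

-- Shrink the log back to a fixed floor (here 4000 MB) instead of to zero;
-- the first argument is the logical file name from sysfiles.
USE BigBulkDB
DBCC SHRINKFILE (BigBulkDB_Log, 4000)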


Monday, March 19, 2012

Best practice method for shrinking the log file in dev environments

Hi all,
Can someone tell me what the best way to reduce my log file size is when
it gets too big? I can't switch recovery mode to Simple, but every so
often I'd like to go in and clear it out.
What is the preferred command to do this?
I've heard the backup command with TRUNCATE_ONLY isn't the best way
to do this? Is that the case, and if so, what's the alternative?
Also, could someone tell me if doing a full backup automatically
truncates the transaction log?
Many thanks
Simon
|||Hello,
Could someone tell me if doing a full backup automatically truncates the
transaction log?
No, a FULL database backup will not clear the transaction log. You need to
back up the transaction log using BACKUP LOG to clear it, or else,
if you do not want the transaction log backup, you could use BACKUP LOG with
TRUNCATE_ONLY to clear the transaction log from the LDF file.
If you do not require transaction log backups, then change the recovery
model for the database to SIMPLE; in this
case, after each commit the transaction log will be cleared. This recovery
model will not allow transaction log backups.
On the other hand, if your data is very critical / production data, set
the recovery model to FULL. This allows you to perform
transaction log backups. In this model, after a commit the transaction log
records still remain in the log file and will get cleared
when you perform a log backup or issue TRUNCATE_ONLY. So TRUNCATE_ONLY
is not a good option on a production server.
If it is a production / critical database, follow these steps:
1. Set the database recovery model to FULL.
2. Perform a full database backup once.
3. Schedule transaction log backups using BACKUP LOG dbname TO
DISK='d:\backup\dbname.tr1'.
4. Perform step 3 every 30 minutes (decide based upon the volume of
transactions), but give each backup a new file name: dbname.tr1, .tr2, .tr3...
5. After steps 3 and 4 the transaction log will be cleared from the
transaction log file.
If you follow these steps, even if your database crashes you can recover up
to the last transaction log backup, and you can do a POINT_IN_TIME recovery
if needed.
If it is non-production or the data is not critical:
1. Set the recovery model to SIMPLE.
2. Perform a full database backup daily.
3. If needed, once in a while you can execute BACKUP LOG dbname WITH
TRUNCATE_ONLY.
With this methodology you can restore only up to the last backup.
Thanks
Hari
"Simon" <simon@.nothanks.com> wrote in message
news:%23$ozNe8RHHA.3440@.TK2MSFTNGP03.phx.gbl...
> Hi all,
> Can someone tell me what the best way to reduce my log file size is when
> it gets too big. I can't switch recovery mode to Simple but every so often
> I'd like to go in and clear it out.
> What is the preffered command to do this?
> I've heard the backup command with the TRUNCATE_ONLY isnt the best way to
> do this? Is that the case and if so, whats the alternative?
> Also, could someone tell me if doing a full backup automatically truncates
> the transaction log?
> Many thanks
> Simon|||Thats a great answer - thanks sincerely for your time and advice
Kindest Regards
Simon
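Hari's two recipes condense to commands along these lines on SQL Server 2000 (database, file, and path names are placeholders; note that WITH TRUNCATE_ONLY was removed in later versions of SQL Server):

-- Production / FULL recovery: a one-time full backup, then scheduled log
-- backups keep the log file in check and allow point-in-time recovery.
BACKUP DATABASE AppDB TO DISK = 'd:\backup\AppDB.bak'
BACKUP LOG AppDB TO DISK = 'd:\backup\AppDB.tr1'   -- repeat every 30 min, new file name each time

-- Dev / non-critical: discard the log without backing it up, then shrink.
BACKUP LOG AppDB WITH TRUNCATE_ONLY
DBCC SHRINKFILE (AppDB_Log, 500)   -- logical log name, target size in MB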


Sunday, March 11, 2012

Best practice for DR/Standby server

We use SQL2000 Standard Edition SP3a. We have a
Production Server and a DR Server that is off site.
Currently I have SPs coded that do log shipping to the DR
Server every hour from the log backups on the production
server. Is this the best way to keep the DR Server up to
date? I've explored Replication, but it seemed much more
complex and structure changes don't get replicated which
means a new shipment of the entire Production DBs anytime
a structure change is made. It seems like Replication
would be tough to support for this. There is also
Hardware Replication with the Production SAN to the SAN at
the DR site. That's not something I'm very familiar with
or how it works exactly. Is Hardware Replication a better
way to do it? I've been told that they may need real time
updates to the DR Server in the future and I'm not sure if
that is realistic or not. I still think log shipping is
best. I could take it down to shipping every 15 min if
needed. Could someone post some opinions/suggestions for
me.
Thanks,
Van
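For reference, the hourly log shipping described above boils down to a BACKUP/RESTORE pair roughly like the following (all names and paths are hypothetical); the essential detail is restoring WITH NORECOVERY (or STANDBY) on the DR side so subsequent logs can still be applied:

-- Production server, on a schedule (hourly, or down to every 15 minutes):
BACKUP LOG ProdDB TO DISK = '\\drserver\ship\ProdDB_0900.trn'

-- DR server, applying each shipped file in sequence:
RESTORE LOG ProdDB FROM DISK = 'e:\ship\ProdDB_0900.trn' WITH NORECOVERY

-- Only at failover time, to bring the DR copy online:
RESTORE DATABASE ProdDB WITH RECOVERY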
Van,
If your DR server is only used for DR purposes this is what I would do - you should have less than 30 secs down time if you try what I propose.
1) Place your SQL Server data and log files into two separate LUNs on the SAN at your primary site. Start SQL Server and make sure it's running normally. You may even want to try putting the OS on the SAN too - up to you.
2) Image the server using an imaging tool and restore the image to your secondary server and attach it to your secondary SAN at your DR site.
3) Stop SQL Server service on the secondary server.
4) Set up replication at the block level using SAN replication software - replicate the LUNs that the log and data files reside on. You will need to do a full mirror to start with, and then stream replication from then on.
5) When the primary site goes down, make sure that replication gets stopped (it will stop anyway), perform a failover on your SAN to the secondary site, then start SQL Server on your secondary server. You will find that SQL Server will start normally and
there will be much, much, much less data loss than log shipping. Make sure that replication cannot go from source --> secondary should the source come back online, otherwise you will get data file corruption.
6) Step 5 can be scripted so that you can just "push a button" to fail over. The entire process to fail over shouldn't take more than 30 seconds - 1 minute.
The failback process once the primary is available again is a reverse of the above but in a more controlled manner.
I have implemented this solution across the Atlantic and it works very, very well indeed, as long as the process is scripted with all manual steps removed. Also, make sure a DR test is performed regularly. Say goodbye to log shipping.
Mark Allison, SQL Server MVP
http://www.markallison.co.uk


Best practice for Data size/Log Size ratio

Just wanted to know what a general rule of thumb is when sizing the log file relative to a database's data file.

We allow the data file for our database to grow 10%, unlimited. We do not allow our log file to autogrow, due to a specific and poorly written process (which we are in a three-month process of removing) that can balloon the log file size.

Should it be 10% of the data file, i.e. if the data file size is 800MB, the log file should be 80MB?

I realize there are a myriad of factors that go into file size, but a general starting point would be nice.

Thanks
Jeff

--
Message posted via http://www.sqlmonster.com
|||

I don't believe there's any MS recommendation on this, although I'm happy to
be corrected. I think I read a post from an MVP saying he uses the size of
the largest table plus 10%; personally, with no other information to base a
decision on, I would go for 20% of the size of the data file(s). But as you
say, there are a myriad of factors, so you may as well just pick a number
(do you have another similar database to compare against, perhaps?), and
then adjust it as you go along. Beware of auto-grow/shrink, though, which
can be a bit of a pain if it kicks in at the wrong time - it's probably
better to go with a fixed size and too large rather than too small,
especially since disk space is relatively cheap.

Simon
|||Thanks Simon,

I will use our largest database. It is funny that this is really a trial and error process (I remember some formula for calculating the sizes, but even that is not an exact science).

Jeff

--
Message posted via http://www.sqlmonster.com
|||

The log is akin to a journal: it captures all the changes to your
database. To size it, make it large enough to hold all the expected
changes to the database between log dumps. I'd suggest you add a
buffer to the size for the 'unexpected' changes.

Also, if you have a large log, it's not going to impact performance.
You may waste some space but disk space is relatively cheap.

On a different note, when you make a change to a table, the table
isn't really changed, the changes go to the log. Only at
'checkpoint' are the changes in memory synchronized with the table.
But you probably already knew that ...
--
Pablo Sanchez - Blueoak Database Engineering, Inc
http://www.blueoakdb.com
|||I agree w/ Pablo that having a large log won't impact performance if you
have space on your server to accommodate it. You're going to take hits
on performance every time your log has to grow, so if your DB isn't a
high-transaction DB and there are few changes throughout the day, a
smaller log would be fine, e.g. 10% or something of that nature. DBs w/
high volumes of transactions throughout the day will fill up the log
faster, making it autogrow a lot, thus hurting performance a little. I
feel everyone's DBs are different in that aspect, so you just need to
monitor it for a bit and see what works best for you.
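Putting the advice above into practice, one hedged sketch is to pre-size the log generously and switch autogrow from the default 10% to a fixed increment, so growth events are rare and predictable (the names and sizes below are illustrative):

-- Pre-size the log so autogrow rarely fires, and grow by a fixed amount
-- rather than a percentage when it does.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Log, SIZE = 200MB, FILEGROWTH = 50MB)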

Thursday, March 8, 2012

Best practice analyser tool installation

I installed the SQL Server Best Practices Analyzer tool on
our new Windows 2003 server.
I logged in with my own username, which is a system
administrator.
After installation, I found the program is only available
when logging in with my own username; if I log in with another
admin username or any other username, I cannot see the program.
Even when I move the shortcut to All Users, it gives the
message:
This action is only available to currently installed
programs
And during installation, it didn't ask whether to install for all
users or the current user.
How can I make the program available to all users?
Thanks
I think this should work. Try the following from the other (non-setup)
accounts.
regasm.exe $(O)\bpacommon.dll /tlb
regasm.exe $(O)\bpaclient.dll /tlb
- Christian
___________________________
Christian Kleinerman
Program Manager, SQL Engine
This posting is provided "AS IS" with no warranties, and confers no rights.
"Best practise analyser" <anonymous@.discussions.microsoft.com> wrote in
message news:0a8b01c4903a$462f62b0$a401280a@.phx.gbl...
> I installed the SQL Server Best Practices Analyzer tool on
> our new Windows 2003 server.
> I log in with my own username, which is a system
> administrator.
> After installation, I found the program is only available
> when logging in with my own username; if I log in with another
> admin username or any other username, I cannot see the program.
> Even when I move the shortcut to All Users, it gives the
> message:
> This action is only available to currently installed
> program
> During installation, it didn't ask whether to install for all
> users or the current user.
> How can I make the program available to all users?
> Thanks

Wednesday, March 7, 2012

best option when master db is not available

I want to set up my database on a new server machine. I have a backup as well as a copy of the database files (data & log), but I lost the master database from the existing server.

So I have 2 ways of getting my database back:

1. Restore from backup.

2. Attach the data files.

Which one is the best option when we start afresh on a new machine with a new master database?

Either method should work. I would prefer the RESTORE.

If the file location (folder) is different, you may need to add the WITH MOVE option to the RESTORE.

Refer to Books Online, Topic: RESTORE
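
For example, a minimal sketch (the database name, logical file names and paths here are hypothetical):

-- restore and relocate the files to the new server's folders
RESTORE DATABASE MyDB
FROM DISK = 'D:\backup\MyDB.bak'
WITH MOVE 'MyDB_Data' TO 'E:\sqldata\MyDB.mdf',
     MOVE 'MyDB_Log' TO 'E:\sqldata\MyDB_log.ldf'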

|||Yes, both options will work. I'd advise you to go with restoring the db from backup. (Note that starting SQL Server in single-user mode is only required if you are restoring the master database itself, not a user database.) Refer to BOL; it's the best resource.|||

Thanks for your replies. I have gone through the BOL.

One thing I need to reconfirm before I proceed: with the loss of the master database I lost all the information that the master database holds. In this situation, which of the options will be best?

|||

I feel that there is some confusion in the requirement, i.e. whether you are trying to restore the master database or a user database. Restoring a master database from one machine to another has many restrictions: the OS, SQL Server version, service pack and configuration (if I remember correctly) should be the same. I have also read somewhere that such a backup can only be restored on the same physical machine (I have never tried this).

To transfer the objects from one server to another, there are scripts available. You need to transfer logins, jobs and DTS packages. This is possible through scripts:

(a) Install a new instance of SQL Server on the new machine.

(b) Transfer the logins; refer to http://support.microsoft.com/default.aspx/kb/246133 (see the sketch after this list).

(c) Script out the jobs and run the script on the destination.

(d) Use the Save As option or object transfer for the DTS packages.

(e) Use backup/restore for the user databases.
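
As a sketch of step (b): KB 246133 has you create the sp_hexadecimal and sp_help_revlogin procedures in master on the source server; running the procedure then generates a script you execute on the destination to recreate the logins with their original passwords and SIDs:

-- on the source server, after creating the procedures from KB 246133
EXEC master..sp_help_revlogin
-- copy the generated output and run it on the destination server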

Madhu

|||

There should not be any confusion; I want to restore a user database.

Since my old master db is lost, I lost the logins, etc. The logins I can recreate manually, but what worries me is whether there is anything else about my user database, lost along with master, that I may not be able to recover.

|||

All of the data in the user databases should be intact.

Any Jobs would be in the msdb database, so if you didn't lose msdb, you still have the jobs.

Most likely, the only thing lost is logins.
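
One follow-up: if you recreate the logins manually, the users inside the restored database may be orphaned, because their stored SIDs no longer match the new logins. On SQL Server 2000 you can check and repair this per database; a minimal sketch with a hypothetical database and user name:

USE MyDB
GO
-- list database users whose login mapping is broken
EXEC sp_change_users_login 'Report'
GO
-- relink an orphaned user to the login of the same name
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppUser'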

best location for the tran log?

I have a SQL 2000 database on RAID 5 storage. The data files and the
tran log are all on the same RAID 5 volume - the E drive. My only other
option for placement of the tran log is a partition of the system drive
- the D drive. The system drive is mirrored. The operating system
resides on the other logical partition of the system drive - the C
drive. I'm concerned that if I move the tran log to the D drive it
might conflict with operating system activity. This is a dedicated SQL
Server so I'm hoping that it won't be too much overhead on the system
drive to place the tran log there as well. Am I better off moving the
tran log to the system drive or keeping it on the E drive with the rest
of the data?
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!|||There shouldn't be a problem with sharing a mirrored drive with the OS for
the logs if SQL Server is the only app running on the server.
Andrew J. Kelly SQL MVP
"T Dubya" <timber_toes@.bigfoot.com> wrote in message
news:%23Qx5Um0LFHA.3340@.TK2MSFTNGP14.phx.gbl...
>I have a SQL 2000 database on RAID 5 storage. The data files and the
> tran log are all on the same RAID 5 volume - the E drive. My only other
> option for placement of the tran log is a partition of the system drive
> - the D drive. The system drive is mirrored. The operating system
> resides on the other logical partition of the system drive - the C
> drive. I'm concerned that if I move the tran log to the D drive it
> might conflict with operating system activity. This is a dedicated SQL
> Server so I'm hoping that it won't be too much overhead on the system
> drive to place the tran log there as well. Am I better off moving the
> tran log to the system drive or keeping it on the E drive with the rest
> of the data?
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!|||I'd recommend that you move the transaction log for a couple of reasons.
1. By having the transaction log and data files on the same drives you risk
losing both in a single failure, or extending the time it takes to restore.
2. RAID 5 is not as efficient as RAID 1 for the performance of the write-
intensive transaction log.
If the server is dedicated to SQL Server then there should be very little
competition for the drive from other processes.
"T Dubya" wrote:

> I have a SQL 2000 database on RAID 5 storage. The data files and the
> tran log are all on the same RAID 5 volume - the E drive. My only other
> option for placement of the tran log is a partition of the system drive
> - the D drive. The system drive is mirrored. The operating system
> resides on the other logical partition of the system drive - the C
> drive. I'm concerned that if I move the tran log to the D drive it
> might conflict with operating system activity. This is a dedicated SQL
> Server so I'm hoping that it won't be too much overhead on the system
> drive to place the tran log there as well. Am I better off moving the
> tran log to the system drive or keeping it on the E drive with the rest
> of the data?
>
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!
>|||Thanks, Andrew, for the response. I'll give it a try. The main reason
I had reservations about it is because before I realized that the D
drive was in fact just a logical partition of the system drive, I tried
to dump the entire database to that location. The database dump ran
much slower that way than when I dumped it to the same RAID 5 where the
data itself resides - about twice as slow, in fact. I just wanted to be
sure that putting the tran logs on the operating system drive wouldn't
cause a similar slow down. Since it is a dedicated SQL Server hopefully
it won't pose a problem. Thanks again.
*** Sent via Developersdex http://www.codecomments.com ***
Don't just participate in USENET...get rewarded for it!
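
For reference, on SQL Server 2000 the usual way to actually move the log file is detach/attach; a minimal sketch with hypothetical names and paths:

USE master
GO
EXEC sp_detach_db 'MyDB'
GO
-- move MyDB_log.ldf from the E: drive to the D: drive at the OS level, then:
EXEC sp_attach_db 'MyDB',
    'E:\sqldata\MyDB.mdf',
    'D:\sqllogs\MyDB_log.ldf'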

Friday, February 24, 2012

Benefits of using SQL over XML

Hi,

I have a question relating to XML and SQL. My company currently runs a website which allows its clients to log in and view their accounts and transaction history online. The website is totally read-only, with the exception of changing passwords.

The data is taken overnight from our back-office system, which runs an Oracle 8i database (we cannot link our website to the database due to the agreement we have in place with our software supplier). The data is written to a CSV file which is then converted into XML. The XML file is saved to the web server and is referenced by the website.

The structure of the website has a hierarchy where a client has a manager who can see their clients' accounts, a branch level that can see all of its managers and the underlying clients, and finally a company level that sees everything.

We are finding that using XML is causing a real performance issue, and I was wondering if migrating the website to SQL Server would improve the performance of the queries etc.

Any advice would be gratefully received

Lee

It really depends on two things: The application and the version of SQL Server you are using. For certain input/retrieval methods, XML can actually be faster than using direct database calls. SQL Server 2005 has native XML features, which you can read more about here:

http://www.sqlsummit.com/People/MRys.htm

Buck
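
To give a feel for the native XML support mentioned above, here is a minimal sketch of shredding an XML document with SQL Server 2005's nodes()/value() methods (the table and the XML shape are hypothetical):

-- hypothetical table holding the nightly feed as an xml column
CREATE TABLE dbo.AccountFeed (FeedDate datetime, Doc xml)
GO
-- shred <account> elements into relational rows
SELECT a.x.value('@id', 'int') AS AccountId,
       a.x.value('@balance', 'money') AS Balance
FROM dbo.AccountFeed AS f
CROSS APPLY f.Doc.nodes('/accounts/account') AS a(x)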

|||

The thing is that our website is taking considerably longer to return results using XML. Our software provider can supply a website which uses Oracle, and an example site using test data seems to query and return the data in far less time than ours does using XML. But that site is a far more costly option and does not provide all the functionality we require. The main reason we need the performance increase is that we want to be able to use the website internally for our branches and front-office staff, so performance is key; it will have about 20-30 users. We are planning to do this because we are unable to restrict front-office staff's access to parts of our back-office system. The current performance of the website means that the staff would have to deal with a sluggish system.

Our website designer has said that he would have to rewrite the website to change it from XML to SQL; would XQuery be a simpler solution? We are, within reason, happy to purchase whatever software is required to make this work.

|||

Again, it all depends on how the application is coded. Simply changing from XML to an RDBMS query doesn't guarantee that one will be faster than the other. In other words, you can code an application to be faster in either case.

If performance is key, then for large data sets a database platform might be the way to go. If you need to share data between multiple systems, then XML might be the way to go. It all depends on your needs, but in either case you'll want to evaluate your code to ensure that it is as optimal as possible for your situation.
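
If the site does move to SQL Server, the existing nightly CSV extract could be loaded straight into a table instead of being converted to XML; a minimal sketch, with hypothetical file and table names:

-- hypothetical staging table matching the CSV layout
CREATE TABLE dbo.TransactionStage (AccountId int, TranDate datetime, Amount money)
GO
-- load the overnight extract, skipping the header row
BULK INSERT dbo.TransactionStage
FROM 'C:\feeds\transactions.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)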