
Sunday, March 25, 2012

Best Replication Method to Use

We want to allow our customer base to be able to access their account
information online. I want to set up a second SQL Server so the customers can
use it for lookups. The front end to access this info is web based.
What replication method is the best one to use to update the database, say
every 24 hours at night? Thanks!!
If your database is not too large, snapshot replication may be best for
you.
Otherwise, some kind of log shipping will be good too; see the other thread on
simple log shipping.
justin
Scopus69 wrote:
> We want to allow our customer base to be able to access their account
> information online. I want to setup a second SQL server so the customers can
> use this for looks up. The front end to access this info is web based.
> What replication method is the best one to use to update the database say
> every 24 hours at night? Thanks!!
|||I think transactional replication would work for this. However, this will
require each table you are replicating to have a primary key.
I am a little confused by the data flow. Are you saying data moves from the
web server SQL Server database to another SQL Server? Or is it moving
internally to the SQL Server supporting the web site?
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Scopus69" <Scopus69@.nospam.postalias> wrote in message
news:3D5CC7EA-4702-417E-AE2D-9985B1E4B781@.microsoft.com...
> We want to allow our customer base to be able to access their account
> information online. I want to setup a second SQL server so the customers
> can
> use this for looks up. The front end to access this info is web based.
> What replication method is the best one to use to update the database say
> every 24 hours at night? Thanks!!
|||Sorry for the confusion. The GUI interface to the data is a web interface
that connects to the backend SQL Server. What I would like to do is set up
another web & SQL server for our customers so they can use it for lookups. I
really don't want them in our production DB.
I was wondering what is the best way to get the data off the production SQL
Server to the customer SQL Server on a nightly basis? I don't think log
shipping will work because it will put the shipped DB in "read only".
So what method would be the best to use? Thanks!
"Hilary Cotter" wrote:

> I think transactional replication would work for this. However this will
> require each table you are replicating to have a primary key.
> I am a little confused by the data flow. Are you saying data moves from the
> web server SQL Server database to another SQL Server? Or is it moving
> internally to the SQL Server supporting the web site.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
> "Scopus69" <Scopus69@.nospam.postalias> wrote in message
> news:3D5CC7EA-4702-417E-AE2D-9985B1E4B781@.microsoft.com...
>
>
|||I think transactional is your best bet.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Scopus69" <Scopus69@.nospam.postalias> wrote in message
news:76F950CE-B1EA-493F-8092-5ED6AA73EC75@.microsoft.com...
> Sorry for the confusion. The GUI interface to the data is a web interface
> that connects to the backend SQL server. What I would like to do is
> setup
> another web & SQL server for our cutomers so they can use it for lookups.
> I
> really don't want them in our prduction DB.
> I was wondering what is the best way to get the data off the production
> SQL
> server to the customer SQL server on a nightly basis? I don't think log
> shipping will work because it will put the shipped DB in "read only"
> So what method would be the best to use? Thanks!
> "Hilary Cotter" wrote:
|||I also like Transactional Replication if the data is dynamic at the source
and the users who will be talking to your target server need updated
information as well for their lookups. If current data is not an issue, that
is, they don't mind the data being static, then maybe snapshot will work.
But then again it depends on how large the data is. For me, one-way
transactional seems to fit the bill here.
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:enpFYKCDGHA.1028@.TK2MSFTNGP11.phx.gbl...[vbcol=seagreen]
> I think transactional is your best bet.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
> "Scopus69" <Scopus69@.nospam.postalias> wrote in message
> news:76F950CE-B1EA-493F-8092-5ED6AA73EC75@.microsoft.com...
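To make the transactional suggestion concrete, here is a minimal T-SQL sketch of publishing one table to the customer-facing server. All names are hypothetical, the publisher is assumed to already be configured as its own Distributor, and the Distribution Agent can simply be scheduled to run once a night instead of continuously; see sp_addpublication and sp_addarticle in Books Online for the full parameter lists.

-- Run at the publisher, in the production database (names are examples only).
USE Production
GO
EXEC sp_replicationdboption @dbname = N'Production',
    @optname = N'publish', @value = N'true'
GO
-- Create the publication; every published table needs a primary key.
EXEC sp_addpublication @publication = N'CustomerLookups',
    @sync_method = N'native', @repl_freq = N'continuous', @status = N'active'
GO
EXEC sp_addarticle @publication = N'CustomerLookups',
    @article = N'Accounts', @source_object = N'Accounts'
GO
-- Push the data to the hypothetical customer-facing server and database.
EXEC sp_addsubscription @publication = N'CustomerLookups',
    @subscriber = N'CUSTSQL01', @destination_db = N'CustomerLookups',
    @subscription_type = N'push'
GO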

Sunday, March 11, 2012

best practice for disk partitioning

Can anyone please tell me what the best practice is for SQL 2000 on a
Windows 2003 server with regards to disk partitioning? My standard setup is
c:/d: on a mirrored disk array (c is for the OS, d for data) and all other data on
a SAN. What is the best way to configure SQL?
What other data is there except data<g>? A logical partition does nothing
for performance if it is on the same drive array as something you are trying
to keep it away from. While C: is great for the OS and SQL binaries, you are
best to place the data and logs on the SAN.
Andrew J. Kelly SQL MVP
"Bad Beagle" <maxwelli@.shaw.ca> wrote in message
news:%23IbisWnzEHA.260@.TK2MSFTNGP11.phx.gbl...
> Can anyone please tell me what the best practice is for SQL 2000 on a
> WIndows 2003 server with regards to disk paritioning? My standard setup
> is c:/d: on mirrored disk array (c is for OS, d for data) and all other
> data on a SAN. What is the best way to configure sql?
>
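A hedged sketch of what that layout looks like when a database is created; the drive letters, paths, and sizes are hypothetical, with F: and G: standing in for SAN volumes dedicated to data files and transaction logs respectively:

-- Example only: data file on one SAN volume, log on another.
CREATE DATABASE Sales
ON PRIMARY
    ( NAME = Sales_Data, FILENAME = 'F:\SQLData\Sales_Data.mdf', SIZE = 10240MB )
LOG ON
    ( NAME = Sales_Log, FILENAME = 'G:\SQLLogs\Sales_Log.ldf', SIZE = 2048MB )
GO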


Wednesday, March 7, 2012

best option when master db is not available

I want to set up my database on a new server machine. I have a backup as well as a copy of the database files (data & log), but I lost my master database from the existing server.

So I have 2 ways of having my database:

1. restore from backup

2. attach the data files.

Which one is the best option when we start afresh on a new machine with a new master database?

Either method should work. I would prefer the RESTORE.

If the file location (folder) is different, you may need to add the WITH MOVE option to the RESTORE.

Refer to Books Online, Topic: RESTORE
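A hedged sketch of such a restore; the database name, backup path, target folders, and logical file names are hypothetical (RESTORE FILELISTONLY against the backup will show the actual logical names):

-- Example only: restore to new file locations on the new machine.
RESTORE DATABASE MyAppDB
FROM DISK = 'D:\Backups\MyAppDB.bak'
WITH MOVE 'MyAppDB_Data' TO 'E:\SQLData\MyAppDB_Data.mdf',
     MOVE 'MyAppDB_Log' TO 'F:\SQLLogs\MyAppDB_Log.ldf',
     RECOVERY
GO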

|||Yes, both options will work. I'd advise you to go with restoring the DB from backup, but ensure that you start SQL Server in single-user mode, and then only you can perform the restoration. Refer to BOL; it's the best resource.|||

Thanks for your replies. I have gone through the BOL.

One thing I need to reconfirm before I proceed: with the loss of the master database I lost all the information that the master database holds. In this situation, which of the options will be best?

|||

I feel that there is some confusion in the requirement, i.e. whether you are trying to restore the master database or a user database. To restore a master database from one machine to another machine you have many restrictions: the OS/SQL Server version/service pack and configuration (if I remember correctly) should be the same. I have also read somewhere that the backup can only be restored on the same physical machine (I have never tried this).

To transfer the objects from one server to the other there are scripts available. You need to transfer logins/jobs/DTS packages. This is possible through scripts.

(a) Install a new instance of SQL Server on the new machine

(b) Transfer the logins; refer to: http://support.microsoft.com/default.aspx/kb/246133

(c) Script the jobs and run the script on the destination

(d) Use the Save As option or file object transfer for DTS packages

(e) Use backup/restore for the user databases.

Madhu

|||

There should not be any confusion; I want to restore a user database.

Since my old master DB is lost, I lost the logins, etc. Logins I can recreate manually, but I am worried whether there is anything else about my user database, lost along with the master DB, which I may not be able to recover.

|||

All of the data in the user databases should be intact.

Any Jobs would be in the msdb database, so if you didn't lose msdb, you still have the jobs.

Most likely, the only thing lost is logins.
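Since logins live in master, here is a hedged sketch of recreating one and re-linking the matching (now orphaned) database user after the restore. The login name, password, and database are examples only; sp_change_users_login is the procedure for re-mapping orphaned users in SQL Server 2000/2005.

-- Recreate the lost SQL login (example name and password).
EXEC sp_addlogin @loginame = N'webapp', @passwd = N'Str0ng!Passw0rd'
GO
USE MyAppDB
GO
-- Re-map the orphaned database user to the recreated login.
EXEC sp_change_users_login @Action = 'Update_One',
    @UserNamePattern = N'webapp', @LoginName = N'webapp'
GO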

Saturday, February 25, 2012

Best Development Machine Configuration Recommendations

What is the best / recommended setup for a development system where you want
to develop using .NET 2.0 with VS 2005 against both SSRS 2000 and SSRS
2005?

VS 2005 does not create reports that are compatible with SRS 2000. However,
the SRS 2000 report designer works against both SRS 2000 and 2005.
Thanks
Tudor
"Jim" wrote:
> What is the best / recommended setup for a development system where you want
> to develop using .NET 2.0 using VS 2005 against both SSRS 2000 and SSRS
> 2005?
>
|||But, and this is important, those reports cannot provide any of the 2005
features like end-user sorting, multi-valued parameters, etc.
I suggest installing both development environments. To do this you need VS
2003 and VS 2005. They will install side by side and both be usable (that is
what I have done).
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Tudor Trufinescu (MSFT)" <TudorTrufinescuMSFT@.discussions.microsoft.com>
wrote in message news:BEF6807F-D848-4C1E-AC6A-32706D9B8F33@.microsoft.com...
> VS 2005 does not create reports that are compatible with SRS 2000.
> However,
> the SRS 2000 report designer works against both SRS 2000 and 2005.
> Thanks
> Tudor
> "Jim" wrote:
>> What is the best / recommended setup for a development system where you
>> want
>> to develop using .NET 2.0 using VS 2005 against both SSRS 2000 and SSRS
>> 2005?
>>

Friday, February 24, 2012

Best Configuration for a 3 Node SQL 2000 Cluster on Windows 2003?

Ok, I've got the cluster set up and running, but having never done this,
I'm not sure if I'm setting up SQL right... We're trying to migrate
our multitude of SQL Servers running on older hardware to the new
cluster, but I want to make sure we don't shoot ourselves in the foot.
Here is what we've got:
Specs:
3 HP BL20P Blade Server (Twin 3.6GHz Xeon, 4GB Ram)
1 HP MSA1000 w/ Twin Fibre Switches (Dual Path Redundancy)
Current Setup:
Windows 2003 Enterprise, 20GB C:, 10GB D: (Pagefile), 37GB E: Data
MSA1000 is currently configured with 4 36GB Arrays (Quorum, 2 for Trans
Logs, and 1 for Backups) and 2 120GB Arrays (Database Data), but there
is about 1.2TB left on the controller for additional space.
Followed all the instructions, Public IPs, Private IPs, etc...
Installed SQL on the first two nodes (SQLCL01 & SQLCL02) and have two
Virtual Servers (Same name, actual server name is longer and unique),
and then two instances, INST1 on SQLCL01, and INST2 on SQLCL02.
SQLCL03 is the failover server which of course is identical to the
first two. We're never expecting to have 2 fail, but we may add a 4th
Server/Node in the future (we have 5 additional slots in our two Blade
Chassis).

From what I've read you "can" install up to 16 instances into a single
cluster, but obviously I don't know if #1 I should try and install more
than 2 instances, or #2 if I even can. I've tried rerunning the SQL
Setup just to see, and all it lets me do is "modify" the current
install or remove it; it won't let me add another instance.
So... What am I looking at here? Is this the optimum configuration
for now, or can I do more? What about memory? Should I limit each SQL
Server instance to a certain level of RAM, say 3GB? Or possibly 2GB,
in case both of the two main servers ever fail and everything gets forced
to the 3rd? These servers will pretty much only be used for SQL,
nothing else, so there isn't too much worry about applications battling
for memory.
Thanks in advance. I know some of these questions may seem rather
newbish, but I've installed and administered SQL2k before, but never a
cluster... so this is new ground for me and the documentation out
there is not very helpful, most of it refers to SQL2k on Windows 2000,
not Windows Server 2003.
Jon Casimir
Lotsa comments inline.
<kazsmir@.gmail.com> wrote in message
news:1125515891.775160.31810@.z14g2000cwz.googlegro ups.com...
> Ok, I've got the cluster setup and running, but having never done this,
> I'm not sure if I'm setting up SQL right... We're trying to migrate
> our multitude of SQL Server running on older hardware to the new
> cluster, but I want to make sure we don't shoot ourselves in the foot.
> Here is what we've got:
> Specs:
> 3 HP BL20P Blade Server (Twin 3.6GHz Xeon, 4GB Ram)
> 1 HP MSA1000 w/ Twin Fibre Switches (Dual Path Redundancy)
> Current Setup:
> Windows 2003 Enterprise, 20GB C:, 10GB D: (Pagefile), 37GB E: Data
> MSA1000 is currently configured with 4 36GB Arrays (Quorum, 2 for Trans
> Logs, and 1 for Backups) and 2 120GB Arrays (Database Data), but there
> is about 1.2TB left on the controller for additional space.
>
I am not a big fan of blade servers as cluster nodes. Too many single
failure points for what is intended to be a highly available system.
A well-tuned, dedicated SQL server should have minimal need for a paging
file. If you are paging heavily, you have something tuned wrong.
Backups should never be stored on the same host computer or storage array as
the primary data store, even if they are on separate physical disks. Backup
across the net to a file share for immediate use and archive those files to
tape for longer retention periods.

> Followed all the instructions, Public IPs, Private IPs, etc...
> Installed SQL on the first two nodes (SQLCL01 & SQLCL02) and have two
> Virtual Servers (Same name, actual server name is longer and unique),
> and then two instances, INST1 on SQLCL01, and INST2 on SQLCL02.
> SQLCL03 is the failover server which of course is identical to the
> first two. We're never expecting to have 2 fail, but we may add a 4th
> Server/Node in the future (we have 5 additional slots in our two Blade
> Chassis).
This doesn't sound right. You should have each Virtual Server\Instance
combination installed on all nodes on the cluster so you can fail over as
needed. During install time you can select which cluster nodes to install
SQL to. You can set the preferred node order of each Virtual Server later
independently so they start and fail where you choose.
On a multi-node, multi-instance cluster, I usually only worry about
first-order failures. If I have more than one instance go south on me, it
is usually the entire cluster that bombs. If I have one instance with a
problem, somebody competent better be standing in front of it fixing the
problem within 30 minutes. You can adjust memory settings and failover
order at that time.

> cluster, but obviously I don't know if #1 I should try and install more
> than 2 instances, or #2 if I even can. I've tried rerunning the SQL
> Setup just to see, and all it lets me do it "modify" the current
> install or remove it, it won't let me add another instance.
>
SQL won't let you add a new virtual server unless there is at least one
unassigned cluster disk resource to anchor the instance. Whether you
"should" install more instances is another matter. Each instance looks and
acts like a separate server on the network. Generally, multiple instances
in a cluster are used to manage security and performance. In my history, I
find that one instance per node + one spare is an optimal configuration, but
your needs with this server consolidation project may vary.

> So... What am I looking at here? Is this the optimum configuration
> for now, or can I do more? What about memory? Should I limit each SQL
> Server instance to a certain level of RAM, say 3GB? Or possibly 2GB,
> incase both the two main servers ever fail and everything gets forced
> to the 3rd? These servers will pretty much only be used for SQL,
> nothing else, so there isnt' too much worry about applications battling
> for memory.
Since you have paid for Enterprise Edition anyway you should max out the
memory, although your choice of blade servers as hosts may limit that
expansion capability. You will need to consider what happens during a
failover so that you can tolerate "stacking" multiple instances on the same
host node.
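For the memory question, a hedged sketch of capping an instance so that two instances can share a node after a failover; the 2048 MB figure is only an example for these 4 GB nodes and would be tuned per instance:

-- Run on each instance; the value is in MB.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max server memory (MB)', 2048
RECONFIGURE
GO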

> Thanks in advance. I know some of these questions may seem rather
> newbish, but I've installed and administered SQL2k before, but never a
> cluster... so this is new ground for me and the documentation out
> there is not very helpful, most of it refers to SQL2k on Windows 2000,
> not Windows Server 2003.
Most of the considerations for Windows 2000 clustering apply to Windows
2003, except for some installation gotchas. Unless you are sure something
from Windows 2000 doesn't apply, assume it does.
Now is the best time to ask "dumb" questions. Later, when your "highly
available" database solution that you bet your job on is down is the worst
time.

> Jon Casimir
>
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP

benefits/disadvantages of activex/sql-dmo

I have just set up my main SQL Server as a central publisher/distributor and a number of laptops (which only connect to the network once a week) with MSDE as anonymous pull subscribers, using merge replication. Using Windows Synchronisation Manager I have run the common conflicts I expect to occur and everything works OK.
Could anyone tell me the benefits/pitfalls of using the SQL Merge Control or SQL-DMO, over Windows Synchronisation Manager?
Is there a way during synchronisation to determine which side (publisher/subscriber) has priority on each conflict as and when they occur?
Please bear in mind this is the first time I have worked with SQL Server and I am the only IT person in a small company, so I am avoiding over-complicating things for users as much as possible. These message boards are brilliant for advice from people more experienced than me.
Thanks for your help
SQL DMO is what the replication wizards use. Under the covers it runs replication stored procedures.
Think of the Active
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"James P" wrote:

> i have just setup my main sql server as a central publisher/distributor and a number of laptops (only connect to network once a week) with msde as annonymous pull subscribers, using merge replication. Using windows synchronisation manager i have run the common conflicts i expect to occur and everything works ok.
> Could anyone tell me the benefits/pitfalls of using SQL Merge Control or SQL-DMO, over Windows Syncronisation Manager?
> Is there a way during syncronisation to determine which side publisher/subscriber has priority on each conflict as and when they occur?
> Please bear in mind this is the first time I have worked with SQL Server and I am the only IT person in a small company so I am avoiding over-complicating things for users as much as possible. These message boards are brilliant for advice from people more experienced than me.
> Thanks for your help
>
|||sorry that last message was sent prematurely.
Think of the ActiveX controls as a lightweight version of SQL-DMO. Windows Synchronization Manager uses the ActiveX controls.
Here is a brief rundown of the differences. BTW - I only use SQL-DMO; although it's more complex to code with, it is more feature-rich.
1) If you are building publications, you must use SQL-DMO. You cannot build publications or push subscriptions with the ActiveX replication controls.
2) The ActiveX replication controls' functionality is limited to copying subscription databases (but not attaching them), managing the Snapshot and Distribution Agents, creating pull subscriptions, and reinitializing subscriptions.
3) Despite their limitations, the ActiveX replication controls have proven to be far more popular than SQL-DMO, as they contain only three classes and are simpler to work with.
4) You can't control the ActiveX agents through the agents folder in EM.
To answer your specific question regarding priority: in SQL-DMO it's the Priority property of the MergePublication class; in ActiveX it's the SubscriptionPriority and SubscriptionPriorityType properties of the SQLMerge class.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"James P" wrote:

> i have just setup my main sql server as a central publisher/distributor and a number of laptops (only connect to network once a week) with msde as annonymous pull subscribers, using merge replication. Using windows synchronisation manager i have run the common conflicts i expect to occur and everything works ok.
> Could anyone tell me the benefits/pitfalls of using SQL Merge Control or SQL-DMO, over Windows Syncronisation Manager?
> Is there a way during syncronisation to determine which side publisher/subscriber has priority on each conflict as and when they occur?
> Please bear in mind this is the first time I have worked with SQL Server and I am the only IT person in a small company so I am avoiding over-complicating things for users as much as possible. These message boards are brilliant for advice from people more experienced than me.
> Thanks for your help
>
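For reference, both SQL-DMO and the ActiveX controls end up driving the same replication stored procedures. A hedged T-SQL sketch of assigning conflict priority when the subscription is registered at the publisher; all names are hypothetical, the exact parameters of sp_addmergesubscription should be confirmed in Books Online, and since anonymous subscribers always sync at local priority, a named global subscription is assumed here:

-- Run at the publisher: register a global subscriber whose changes win
-- conflicts against lower-priority subscribers (priority runs from 0 to 100).
EXEC sp_addmergesubscription
    @publication = N'LaptopPub',
    @subscriber = N'LAPTOP01',
    @subscriber_db = N'FieldData',
    @subscription_type = N'pull',
    @subscriber_type = N'global',
    @subscription_priority = 75
GO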

Thursday, February 16, 2012

behavior of SQL on joined queries

Hi all,

Currently our product has a setup that stores information about
transactions in a transaction table. Additionally, certain transactions
pertain to specific people, and extra information is stored in another
table. So for good or ill, things look like this right now:

create table TransactionHistory (
TrnID int identity (1,1),
TrnDT datetime,
--other information about a basic transaction goes here.
--All transactions have this info
Primary Key Clustered (TrnID)
)

Create Index TrnDTIndex on TransactionHistory(TrnDT)

create table PersonTransactionHistory (
TrnID int,
PersonID int,
--extended data pertaining only to "person" transactions goes
--here. only Person transactions have this
Primary Key Clustered(TrnID),
Foreign Key (TrnID) references TransactionHistory (TrnID)
)

Create Index TrnPersonIDIndex on PersonTransactionHistory(PersonID)

A query about a group of people over a certain date range might fetch
information like so:

select * from TransactionHistory TH
inner join PersonTransactionHistory PTH
on TH.TrnID = PTH.TrnID
where PTH.PersonID in some criteria
and TH.TrnDT between some date and some date

In my experience, this poses a real problem when trying to run queries
that use both date and PersonID criteria. If my guesses are correct, this
is because SQL is forced to do one of two things:

1 - Use TrnPersonIDIndex to find all transactions which match the person
criteria, then for each do a lookup in the PersonTransactionHistory to
fetch the TrnID and subsequently do a lookup of the TrnID in the clustered
index of the TransactionHistory Table, and finally determine if a given
transaction also matches the date time criteria.

2 - Use TrnDTIndex to find all transactions matching the date criteria,
and then perform lookups similar to the above, except for PersonID instead
of datetime.

Compounding this is my suspicion (based on a performance comparison of when
I specify which indexes to use in the query vs. when I let SQL Server
decide itself) that SQL sometimes chooses a very non-optimal course. (Of
course, sometimes it chooses a better course than me - the point is I want
it to always be able to pick a good enough course such that I don't have
to bother specifying.) Perhaps the table layout is making it difficult for
SQL Server to find a good query plan in all cases.

Basically I'm trying to determine ways to improve our table design here to
make reporting easier, as this gets painful when running reports for
large groups of people over large date ranges. I see a few options based
on my above hypothesis, and am looking for comments and/or corrections.

1 - Add the TrnDT column to the PersonTransactionHistory Table as
well. Then create a foreign key relationship of PersonTransactionHistory
(TrnID, TrnDT) references TransactionHistory (TrnID, TrnDT) and create
indexes on PersonTransactionHistory with (TrnDT, PersonID) and
(PersonID, TrnDT). This seems like it would let SQL Server make
much more efficient execution plans. However, I am unsure if SQL server
can leverage the FK on TrnDT to use those new indexes if I give it a query
like:

select * from TransactionHistory TH
inner join PersonTransactionHistory PTH
on TH.TrnID = PTH.TrnID
where PTH.PersonID in some criteria
and TH.TrnDT between some date and some date

The trick being that SQL Server would know that it can use PTH.TrnDT and
TH.TrnDT interchangeably because of the foreign key (this would support all
the preexisting queries that explicitly named TH.TrnDT - any that
didn't explicitly specify the table would now have ambiguous column
names...)
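Roughly, the schema change for option 1 might look like the sketch below; the constraint and index names are placeholders, and the new TrnDT column would have to be populated from TransactionHistory before the foreign key is created:

-- A unique constraint on (TrnID, TrnDT) gives the composite FK something to reference.
ALTER TABLE TransactionHistory
    ADD CONSTRAINT UQ_TH_TrnID_TrnDT UNIQUE (TrnID, TrnDT)

ALTER TABLE PersonTransactionHistory ADD TrnDT datetime
-- (populate PersonTransactionHistory.TrnDT from TransactionHistory here)

ALTER TABLE PersonTransactionHistory
    ADD CONSTRAINT FK_PTH_TH_TrnID_TrnDT
    FOREIGN KEY (TrnID, TrnDT) REFERENCES TransactionHistory (TrnID, TrnDT)

CREATE INDEX PTH_TrnDT_PersonID ON PersonTransactionHistory (TrnDT, PersonID)
CREATE INDEX PTH_PersonID_TrnDT ON PersonTransactionHistory (PersonID, TrnDT)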

2 - Just coalesce the two tables into one. The original intent was to save
space by not requiring extra columns about Persons for all rows, many of
which did not have anything to do with a particular person (for instance a
contact point going active). In my experience with our product, the end
user's decisions about archiving and purging have a much bigger impact
than this, so in my opinion efficient querying is more important than
space. However I'm not sure if this is an elegant solution either. It also
might require more changes to existing code, although the use of views
might help.

We also run reports based on other criteria (columns I replaced with
comments above) but none of them are as problematic as the situation
above. However, it seems that if I can understand the best way to solve
this, I will be able to leverage that approach if other types of reports
become problematic.

Any opinions would be greatly appreciated. Also any references to good
sources regarding table and index design would be helpful as well (online
or offline references...)

thanks,
Dave

Metal Dave (metal@.spam.spam) writes:
> create table TransactionHistory (
> TrnID int identity (1,1),
> TrnDT datetime,
> --other information about a basic transaction goes here.
> --All transactions have this info
> Primary Key Clustered (TrnID)
> )
> Create Index TrnDTIndex on TransactionHistory(TrnDT)
> create table PersonTransactionHistory (
> TrnID int,
> PersonID int,
> --extended data pertaining only to "person" transactions goes
> --here. only Person transactions have this
> Primary Key Clustered(TrnID),
> Foreign Key (TrnID) references TransactionHistory (TrnID)
> )
> Create Index TrnPersonIDIndex on PersonTransactionHistory(Person)

Given your query, it could be a good idea to have the clustered index
on TrnDT and PersonID instead. The main problem now with the queries
is that SQL Server will have to make a choice between Index Seek +
Bookmark Lookup on the one hand, and Clustered Index Scan on the other.
This is a guessing game that does not always end up the best way.

Of course, you may have other queries that are best off with clustering
on the Pkey, but this does not seem likely. (Insertion may however
benefit from a monotonically increasing index. A clustered index on
PersonID may cause fragmentation.)
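A hedged sketch of that change on TransactionHistory; the constraint name is made up (the original PRIMARY KEY was declared inline, so its actual name is system-generated), and any foreign keys that reference TrnID have to be dropped and re-created around it:

-- Keep the primary key, but make it nonclustered so the clustered index can move.
ALTER TABLE TransactionHistory DROP CONSTRAINT PK_TransactionHistory
ALTER TABLE TransactionHistory
    ADD CONSTRAINT PK_TransactionHistory PRIMARY KEY NONCLUSTERED (TrnID)

-- Cluster on the date instead (TrnID could be appended for uniqueness).
CREATE CLUSTERED INDEX TrnDT_Clustered ON TransactionHistory (TrnDT)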

> 1 - Add the TrnDT column to the PersonTransactionHistory Table as
> well. Then create a foreign key relationship of PersonTransactionHistory
> (TrnID, TrnDT) references TransactionHistory (TrnID, TrnDT) and create
> indexes on PersonTransactionHistory with (TrnDT, PersonID) and
> (PersonID, TrnDT). This seems like it would let SQL Server make
> much more efficient execution plans. However, I am unsure if SQL server
> can leverage the FK on TrnDT to use those new indexes if I give it a query
> like:
> select * from TransactionHistory TH
> inner join PersonTransactionHistory PTH
> on TH.TrnID = PTH.TrnID
> where PTH.PersonID in some criteria
> and TH.TrnDT between some date and some date

Well, take a copy of the database and try it!

(But first try changing the clustered index.)

> 2 - Just coalesce the two tables into one. The original intent was to save
> space by not requiring extra columns about Persons for all rows, many of
> which did not have anything to do with a particular person (for instance a
> contact point going active).

Depends a little on the ratio. If the PersonTransactionHistory is 50%
of all rows in the main table, collapsing into one is probably the best.
If it's 5%, I don't think it is.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||On Tue, 26 Oct 2004, Erland Sommarskog wrote:

> Given your query, it could be a good idea to have the clustered index
> on TrnDT and PersonID instead. The main problem now with the queries
> is that SQL Server will have to make a choice between Index Seek +
> Bookmark Lookup on the one hand, and Clustered Index Scan on the other.
> This is a guessing game that does not always end up the best way.
> Of course, you may have other queries that are best off with clustering
> on the Pkey, but this does not seem likely. (Insertion may however
> benefit from a montonically increasing index. A clustered index on
> PersonID may cause fragmentation.)

My intuition agrees with you regarding the index in this case. I'm
pretty sure the clustered bookmark scan kills us on many reports. However
I haven't looked with enough depth at the wide variety of queries we use
to know for sure where I should put the clustered index so I'm reserving
judgement for now. I'd also like to study a bit more first so that I
don't replace one hasty decision with another - it might solve an
individual problem but exacerbate others.

For instance, I think

select * from PersonTransactionHistory PTH
inner join TransactionHistory TH on PTH.TrnID = TH.TrnID
where PTH.PersonID = 12345

would be harmed by moving the TH clustered index from TH.TrnID to
TH.TrnDT, as it would now have to make the same lookup vs scan choice in
order to perform the join. Does that sound reasonable? And since it's
rare for us to access PTH without the inner join to TH, there are probably
many queries like this.

> > 1 - Add the TrnDT column to the PersonTransactionHistory Table as
> > well. Then create a foreign key relationship of PersonTransactionHistory
> > (TrnID, TrnDT) references TransactionHistory (TrnID, TrnDT) and create
> > indexes on PersonTransactionHistory with (TrnDT, PersonID) and
> > (PersonID, TrnDT). This seems like it would let SQL Server make
> > much more efficient execution plans. However, I am unsure if SQL server
> > can leverage the FK on TrnDT to use those new indexes if I give it a query
> > like:
> > select * from TransactionHistory TH
> > inner join PersonTransactionHistory PTH
> > on TH.TrnID = PTH.TrnID
> > where PTH.PersonID in some criteria
> > and TH.TrnDT between some date and some date
> Well, take a copy of the database and try it!

I appreciate the value of experimentation and normally would do that, but
if it didn't work that wouldn't necessarily prove to me that I wasn't
simply doing something wrong, like not making the foreign key specific
enough or putting something in my query which made SQL Server ignore this
potentially valuable relationship. So I was basically wondering if there
were any good docs regarding what types of information SQL Server will and
will not leverage in its choices, or whether someone familiar with those
rules had some feedback off the top of their head.

> > 2 - Just coalesce the two tables into one. The original intent was to save
> > space by not requiring extra columns about Persons for all rows, many of
> > which did not have anything to do with a particular person (for instance a
> > contact point going active).
> Depends a little on the ration. If the PersonTransactionHistory is 50%
> of all rows in the main table, collapsing into one is probably the best.
> If it's 5%, I don't think it is.

It's probably between 20% and 40% depending on the particular
installation. Is your rationale that for 50% the space saved is
negligible whereas for 5% it is not? For me it's more about limiting
the changes to the client software (definitely keeping the tables
separate) vs. speeding up queries (possibly coalescing) rather than a space
consideration. I did a test once and recall discovering we took up nearly
as much or more space with our indexes than with our tables anyway, so
coalescing might make a big space difference anyway. (This amount of index
space surprised me, but I'm not sure if there is a good rule of thumb for
how much space indexes should take.)

Rereading the post I probably should have just asked for good table design
references right up front. Any takers?

Thanks for the feedback.

Dave

|||Metal Dave (metal@.spam.spam) writes:
> For instance, I think
> select * from PersonTransactionHistory PTH
> inner join TransactionHistory TH on PTH.TrnID = TH.TrnID
> where PTH.PersonID = 12345
> would be harmed by moving the TH clustered index from TH.TrnID to
> TH.TrnDT, as it would now have to make the same lookup vs scan choice in
> order to perform the join. Does that make sound reasonable? And since it's
> rare for us to access PTH without the inner join to TH, there are probably
> many queries like this.

Let's assume for the example that the clustered index in PTH is on PersonID.
Then the join against TH on TrnID will be akin to Index Seek + Bookmark
Lookup, no matter if the index on TrnID is clustered or not. In both
cases you would expect a plan with a Nested Loop join, which means that
for each row in PTH you look up a row in TH. The only difference, if the index
on TrnID is non-clustered, is that you will get a few more reads for
each access. Which indeed is not negligible, since it multiplies with the
number of rows for PersonID.

And just like "SELECT * FROM tbl WHERE nonclusteredcol = @val" has a
choice between index seek and scan, so does this query. Rather than a
nested loop, the optimizer could go for a hash or merge join, which would
mean a single scan of TH. I would guess that the probability for this is
somewhat higher with a NC index on TrnID.

Of course, you can opt to change only PTH, if you like.

> I appreciate the value of experimentation and normally would do that but
> if it didn't work that wouldn't necesarily prove to me that I wasn't
> simply doing something wrong like not making the foreign key specific
> enough or putting something in my query which made SQL server ignore this
> potential valuable relationship. So I was basically wondering if there
> were any good docs regarding what types of information SQL Server will and
> will no leverage in its choices or whether someone familiar with those
> rules had some feedback off the top of their head.

SQL Server does look at constraints, but really how intelligent it is,
I have not dug into. Thus, my encouragement of experimentation.

> It's probably between 20% and 40% depending on the particular
> installation. It's your rationale that for 50% the space saved is
> negligible whereas for 5% is is not?

Actually, I was more thinking in terms of performance, but space and
performance are related. My idea was that with 50%, the space saved is not
worth the extra complexity, and performance may suffer. With 5%, you save a
lot of space, since PTH would be a small table.

Your concern of having to change the client is certainly not one to be
neglected, and if this is costly in development time, I don't think it's
worth it.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Monday, February 13, 2012

beginner: "Login failed for user sa."

Hello,
(sorry for my English...)
Could you help me with a SQL Server 2005 problem?
I had installed SQL Server 2005 and then I tried to set up an application
using SQL Server. Unfortunately, setup fails because the application cannot
log on to SQL Server in SQL Server Authentication mode (user 'sa', password
'sa'). I checked SQL Server Management Studio Express: I have the sa/sa account
(because I've prepared it), but although I can log on in Windows
Authentication mode, I cannot log on with SQL Server Authentication (sa/sa)
because of:

Login failed for user 'sa'. The user is not associated with a trusted SQL
Server connection (Microsoft SQL Server, Error: 18452).

Could you help me please? I suspect the solution is simple but my experience
is not enough.
Thank you very much.
/RAM

take a look here
http://sqlservercode.blogspot.com/2...reason-not.html

|||Andrzej Magdziarz (andrzej.magdziarz@.wp.pl) writes:
> Could you help me with a SQL Server 2005 problem? I had installed SQL
> Server 2005 and then I tried to setup some application using SQL Server.
> Unfortunatelly setup fails because the application cannot logon to SQL
> Server in SQL Server Authentication mode (user 'sa', password 'sa').

That is not a very good password. :-)

> I checked Server Management Studio Express: I have sa/sa account
> (because I've prepared it), but although I can logon in Windows
> Authentication mode, I cannot logon in SQL Server Authentication (sa/sa)
> because of:
> Login failed for user 'sa'. The user is not associated with a trusted SQL
> Server connection (Microsoft SQL Server, Error: 18452).
> Could you help me plase? I suspect that solution is simple but my
> experience is not enough.

By default, SQL Server accepts only logins through Windows authentication,
and you must explicitly permit SQL authentication. Your first chance
to do so is during setup, but you can also do this from Mgmt Studio.
Right-click the server itself in the Object Explorer, select Properties
and then find the Security page.
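To check and finish the job from a query window, a hedged sketch: the SERVERPROPERTY value and the ALTER LOGIN syntax are SQL Server 2005, the service still has to be restarted after switching the authentication mode, and the password below is only a placeholder.

-- 1 = Windows authentication only, 0 = mixed mode.
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly')
GO
-- After enabling mixed mode and restarting, make sure sa is enabled
-- and has a stronger password than 'sa'.
ALTER LOGIN sa WITH PASSWORD = 'Str0ng!Passw0rd'
ALTER LOGIN sa ENABLE
GO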

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Beginner security questions

Thanks in advance for your patience with my beginner questions.
I have a SQL Server instance set up on my XP desktop (the 120 day Evaluation
Edition - V8.00.194). When I installed it I set up the instance with "Windows
Only" authentication.
I have myself and my wife with XP user accounts and also have a guest user
account active. I created a database for practice, and would like to allow
either my wife's account or the guest account to log in and access the one
practice database, with select privileges only on tables. Ideally, from these
"user" XP accounts, I would like to allow access to my practice database only
through the Query Analyzer window. I don't want to allow these accounts to see
other databases or system tables. I don't want them to be able to start up
Enterprise Manager or any tool other than Query Analyzer for the one database.
What are the steps I need to follow to create permissions? After I do these
steps, I want to log in through the guest account and see that my security
setup works.
I have read the chapters in my textbook on security, but still don't really
get it. I guess I need a simpler example that I can practice for myself.
Your help in teaching a newbie is greatly appreciated.

If your goal is to learn how security works by playing with it, the
quickest way is to enable mixed-mode authentication. Then you can
create SQL logins (which are unavailable in Windows Only mode) and
assign them permissions, which you can then test using the Query
Analyzer, which lets you open multiple connections based on different
logins. Once you've tested and debugged security using SQL logins to
mimic your eventual Windows logins, you can delete them and assign
Windows logins to the roles you've created and set your security mode
back to Windows only for production. Not sure which textbook you are
using, but this site has lots of useful resources:
http://www.microsoft.com/sql/techinfo/administration/2000/security/default.asp
--Mary
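A hedged sketch of the SQL-login approach described above, using the SQL Server 2000 procedures; the login, password, and database names are examples only:

-- Create a test SQL login (requires mixed-mode authentication).
EXEC sp_addlogin @loginame = N'practice_reader', @passwd = N'Pr@ctice1',
    @defdb = N'PracticeDB'
GO
USE PracticeDB
GO
-- Give the login access to this database only, with read-only rights
-- via the fixed database role.
EXEC sp_grantdbaccess @loginame = N'practice_reader'
EXEC sp_addrolemember @rolename = N'db_datareader', @membername = N'practice_reader'
GO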
On Tue, 30 Mar 2004 12:56:10 -0800, "Jack Wachtler"
<jack_wachtler@.comcast.net> wrote:

>Thanks in advance for your patience with my beginner questions.
>I have a SQL Server instance setup on my XP desktop (the 120 day Evaluation Edition - V8.00.194). When I installed it I setup the instance with "Windows Only" authentication.
>I have myself and my wife with XP user accounts and also have a guest user account active. I created a database for practice, and would like allow either my wife's account or the guest account to login and access the one practice database, with select priviledge's only on tables. Ideally from these "user" XP accounts, I would like to allow access to my practice database only through the Query Analyzer Window. I don't want to allow these accounts to see other databases or system tables. I don't want them to be able to stratup Enterprise Manager or any tool other than Query Analyzer for the one database.
>What are the steps I need to follow to create permissions? After I do these steps, I want to log in through the guest account and see that my security setup works.
>I have read the chapters in my textbook on security, but still don't really get it. I guess I need a simpler example that I can practice for myself.
>Your help in teaching a newbie is greatly appreciated.

|||Since you've installed with "Windows Only" you may have a few steps to take
before you can allow other users to access the database. Normally this
option works if the SQL Server is part of a domain, which I presume your
machines are not part of. You could try to create a user account on the SQL
Server that has the same name as the user your wife is using on her XP, with
the same password. This might work (I'm not sure).
Another option is to change the security to "Mixed mode", and create SQL
Server accounts. You may be able to change the licensing by way of the "SQL
Server 2000 Licensing" icon on the control panel. However the "Mixed mode"
option may be grayed out. In that case you could do one of two things:
1. Reinstall sql server in mixed mode (remember to back up your database
first)
OR
2. You could do this (cut from another posting found on Google):
Change the LoginMode value of the
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer registry key
(for default instance).
1 = Windows Only
2 = SQL Server and Windows
Sincerely
Svein Terje Gaup
"Jack Wachtler" <jack_wachtler@.comcast.net> wrote in message
news:01A128AE-2A11-4719-BB37-DC8B683D662A@.microsoft.com...
> Thanks in advance for your patience with my beginner questions.
> I have a SQL Server instance setup on my XP desktop (the 120 day
Evaluation Edition - V8.00.194). When I installed it I setup the instance
with "Windows Only" authentication.
> I have myself and my wife with XP user accounts and also have a guest user
account active. I created a database for practice, and would like allow
either my wife's account or the guest account to login and access the one
practice database, with select priviledge's only on tables. Ideally from
these "user" XP accounts, I would like to allow access to my practice
database only through the Query Analyzer Window. I don't want to allow these
accounts to see other databases or system tables. I don't want them to be
able to stratup Enterprise Manager or any tool other than Query Analyzer for
the one database.
> What are the steps I need to follow to create permissions? After I do
these steps, I want to log in through the guest account and see that my
security setup works.
> I have read the chapters in my textbook on security, but still don't
really get it. I guess I need a simpler example that I can practice for
myself.
> Your help in teaching a newbie is greatly appreciated.