Showing posts with label performance.

Thursday, March 29, 2012

Best tool for Performance Tuning

Which is the best third-party tool for SQL Server 2005 performance tuning/optimization?

We purchased Toad for SQL Server from Quest Software, but it works only with dbo schemas. So if your users' objects are in a schema such as xyz, it does not recognize them, because the 2005 version of the tool was built on the SQL Server 2000 model, where schema ownership is tied to the users.

Have you tried the Database Engine Tuning Advisor in Management Studio? I haven't used it a great deal, but what I have done with it seemed quite good. It also doesn't require an extra purchase.


Tuesday, March 27, 2012

Best solution: SQLXML, XML Data Binding, or MSXML?

I need some help determining the best solution for the following scenarios, where the dataset is large and performance is key:
Scenario #1
I am trying to retrieve relational data as XML from a SQL Server 2000 stored proc. My client is a VBA Access application. I want to map the XML elements to database tables/columns, use a stored proc to collect the data and build the XML, and validate it against the XSD. I am assuming that with large datasets it would be faster to create the XML on the server than on the client (desktop PCs), since on the client I would have to loop through data rows to create the XML using MSXML. Thoughts? What about annotated schemas: do you have to use them along with XPath queries even if your result is already filtered in a SQL sproc?
Scenario #2
I have a .NET web service that will be the recipient of this XML; it will validate it and persist it to SQL 2000. After looking at all the available options, it seems to me that XML data binding (creating .NET classes based on the XSD and loading that .NET object from the XML) is the cleanest solution. Reading up on SQLXML, it seems as though every method of persisting data to SQL Server uses the COM-based SQLXMLOLEDB provider, so my code would have to jump from the CLR to COM. I can't see how that is good. And MSXML is also a COM-based parser.
Thoughts?
Scenario #1: I would look at the FOR XML functionality of SQL Server 2000 if
you want to do it in the database. Annotated schemas do not work with stored
procs. You can look into the client-side FOR XML of SQLXML 3.0 if you want
to run FOR XML over the result of a stored proc.
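A minimal sketch of the server-side approach (the table and column names here are hypothetical, not from the original post):

```sql
-- Server-side XML generation in SQL Server 2000 with FOR XML.
-- AUTO nests elements by join order; ELEMENTS emits columns as child
-- elements rather than attributes, which maps more naturally to an XSD.
SELECT c.CustomerId, c.Name, o.OrderId, o.Total
FROM Customers AS c
JOIN Orders AS o ON o.CustomerId = c.CustomerId
ORDER BY c.CustomerId
FOR XML AUTO, ELEMENTS
```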
Re Scenario #2: The SQLXML Bulkload object can be called through the managed
providers (in the latest SP of SQLXML 3.0). You will go through COM interop,
which you don't have to for the data binding. But I think the loading may
still be faster due to the bulkload mechanism. I would suggest doing some
perf tests for your specific data...
Best regards
Michael


Thursday, March 22, 2012

Best practices to optimize a DB?

What are the best practices to optimize performance when accessing a SQL Server DB? Commands, maintenance plans...
Thanks

Wow. That's a pretty wide-open question. Here are some thoughts:

1a. Make sure you know what your backup and recovery plan is; performance isn't worth too much if a single point of failure or a user error causes an extended period of downtime.
1b. Make sure you TEST your recovery plan. If you don't, it's not a plan (it's something between a hope and a dream).
2. Make sure your performance and monitoring effort is in keeping with the priorities of the business; you can spend a LOT of time and money in this area, but if it isn't the business' priority, then you're wasting it.
3. Read, read, read.
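Point 1b above can be made concrete with something like the following (the database name, logical file names, and paths are placeholders):

```sql
-- Back up, verify the media, then restore under a scratch name.
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb.bak' WITH INIT

-- VERIFYONLY checks the backup media, but it is NOT a restore test:
RESTORE VERIFYONLY FROM DISK = 'D:\Backup\MyDb.bak'

-- The real test: restore to a different database name and inspect it.
RESTORE DATABASE MyDb_Test FROM DISK = 'D:\Backup\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'D:\Test\MyDb_Test.mdf',
     MOVE 'MyDb_Log'  TO 'D:\Test\MyDb_Test.ldf'
```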

MS SQL Server 2000 DBA Survival Guide (SAMS, 2000)
Microsoft SQL Server 2000 Operations Guide (Microsoft, 2001)
Inside SQL Server 2000 (Microsoft Press, 2000)
Books On Line

Regards,

hmscott

Wednesday, March 7, 2012

Best Performance?

Assume a heavy transaction environment.
Let's say we have the same database model,
same hardware, file usage, type of data, queries etc. on two servers.
Server A is configured with a RAID 5 on all the data disks (9 of them),
and Server B is configured with RAID 5 on the data disks (5 of them) but
transaction log and tempdb is separated to its own mirrored disks (4 of
them - 2 each).
Which SQL Server will run faster in terms of query response time AND write performance? The database on Server A or Server B?

Assuming disk I/O is the limiting factor, I would say B would most likely outperform A. But even better would be to lose the RAID 5 and use 0+1 on 4 disks, or possibly on 6 and lose the separation of tempdb. Of course this will lose you some space over the RAID 5, but it will perform better. Hardware RAID vs. software RAID will also make a difference. How much separating tempdb gains you depends entirely on the number of queries generating large temporary results.
Opinions are likely to vary a lot and you haven't given much info to go on.
Mike John
|||I completely agree with Mike John's remark to drop RAID 5 completely,
certainly for a 'heavy transaction environment' as you describe. Check out
this link for some reasons why
http://www.miracleas.dk/BAARF/BAARF2.html
(OK, it's Oracle/Unix, but it is just as true for the SQL Server/Windows world too.)
|||On Thu, 9 Oct 2003 11:06:17 -0500, "Rick" <bob@.bob.net> wrote:
Well, like the other guys said, one would probably anticipate B
running a bit faster, but it's also going to depend on how much data
is cached, that is, on RAM size, and just how heavily tempdb gets
used, and for that matter, how heavily the log gets used. If it were
100% simple queries, A might even win; for a more usual mix of 90%
simple queries, 5% complex queries, and 5% highly localized updates,
you're probably close to the break-even point, assuming you're not
running short of disk I/O bandwidth or CPU in either case.
J.

Best Performance Strip setting RAID

We are running SQL 6.5 and plan to upgrade to SQL 2000. I'm wondering what the best stripe size is for the RAID 5 configuration: 8, 16, 32, or 64 KB.
The database is 90% used for read actions; only during the night is there a complete refill of data, and write actions are only for statistics. Any advice is welcome on this subject.
Since SQL Server pages are 64K, the RAID stripe settings must also be set to 64K.|||SQL 6.5 uses 2K pages and SQL 2000 uses 8K pages.

The stripe size of your RAID drives does NOT have to follow the page size; however, I would use a RAID stripe size >= my page size.

Best Performance Query

Hi:

I have the following query, can somebody help me?

SELECT
s.Id, s.Name
FROM
Switch s
INNER JOIN
SwitchTelephoneRange r ON s.Id = r.Id
WHERE
'1526858' BETWEEN FromTelephone AND ToTelephone

Where the '1526858' is a phone number.

My problem is, I want to run the above query for each record in :

select Telephone from PhoneDirectory

So, each telephone number in the second query would match the ' ' in the first query.

How can I do so? Do I need a loop? a cursor? Can you help please?

Thanks

Sounds like you need a correlated sub-query. See here and here for more help.|||

Maybe something like this?

SELECT
s.Id, s.Name
FROM
Switch s
INNER JOIN
SwitchTelephoneRange r ON s.Id = r.Id
INNER JOIN
PhoneDirectory pd ON pd.Telephone BETWEEN FromTelephone AND ToTelephone

|||You could prob' do it by self joining too|||

Thanks for all.

Darek, your way is the best.

How can I join on a table without specifying that a key = key ?

You just joined on a table to get the phone number from and said

"inner join ... ON telephon between"

Can you explain that to me please?

That was a great tip.

Thanks a lot.

|||

Thanks.

Everything after ON in a JOIN clause is nothing more than a set of conditions.

You can write something like that:

FROM a
INNER JOIN b ON b.SomeVarChar LIKE a.SomeOtherVarChar

The previous query written the other way:

SELECT
s.Id, s.Name
FROM
Switch s,
SwitchTelephoneRange r,
PhoneDirectory pd
WHERE
s.Id = r.Id
AND pd.Telephone BETWEEN FromTelephone AND ToTelephone
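The correlated sub-query mentioned earlier in the thread was never shown; a rough sketch, using the same tables as the question, might look like this:

```sql
-- For each directory number, test whether at least one switch range
-- contains it (the subquery is correlated on pd.Telephone):
SELECT pd.Telephone
FROM PhoneDirectory pd
WHERE EXISTS (
    SELECT 1
    FROM SwitchTelephoneRange r
    WHERE pd.Telephone BETWEEN r.FromTelephone AND r.ToTelephone
)
```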

Best performance for SQL2000

Hi
I must configure RAID on an external storage disk array for my SQL 2000 cluster, and I have in mind this configuration:
1- Data file on a separate RAID 5 LUN
2- Log file on another separate RAID 5 LUN
3- Quorum on another separate RAID 1 LUN
Is this configuration good for performance (data file and log file separated) and safety (quorum on a mirror)?
Thanks in advance
Make the log a RAID 1 or a 10 and not a 5; RAID 5 has too many writes for peak performance of logs. Put the extra disks into the data RAID 5 or make it a RAID 10 for better performance.
--
Andrew J. Kelly SQL MVP
|||OK, therefore:
datafile RAID5
logfile RAID1
quorum RAID1
it's ok ?
thanks
|||Yes
--
Andrew J. Kelly SQL MVP
|||Even better:
datafile RAID10
logfile RAID1
quorum RAID1
Regards
Mike

Best Performance

I want to know: I have 2 ways to copy data.
Way I
INSERT into A
SELECT * from B
Way II
DECLARE CURSOR ... For
SELECT * from B
WHILE ... BEGIN
/* insert one by one into A */
END
...
...
Which is best, way I or II? And how do they compare in speed?
I is better. How much faster depends on many things; it could be 2, 10, or 50 times as fast. Experiment and find out.
Steve Kass
Drew University
|||Hi
Way I is best.
Rely on a cursor only if you don't find any other way to do things
(but there will almost always be another way to achieve things easily).
refer:http://www.sql-server-performance.com/cursors.asp
Regards
Magesh
|||Absolutely, you should avoid using cursors.
So What would I use a Cursor for?
Solution:
In Favour
Two reasons would be when you need to do something for each row in a
result set (usually execute a stored procedure), or when you must
process rows in a specific order.
The first case is probably most common in DBA/admin scripts, for
example to write a script which GRANTs execute permission on all stored
procs in a database
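That GRANT example might look like this on SQL 2000 (the role name is hypothetical):

```sql
-- Admin-style cursor use: GRANT EXECUTE on every user stored proc.
DECLARE @name SYSNAME, @sql NVARCHAR(400)

DECLARE c CURSOR FAST_FORWARD FOR
    SELECT name FROM sysobjects
    WHERE type = 'P' AND OBJECTPROPERTY(id, 'IsMSShipped') = 0

OPEN c
FETCH NEXT FROM c INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'GRANT EXECUTE ON ' + QUOTENAME(@name) + N' TO app_role'
    EXEC (@sql)
    FETCH NEXT FROM c INTO @name
END
CLOSE c
DEALLOCATE c
```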
Cursors are mainly used to traverse each row in the result of a query.
If you would like to check each row and perform a certain operation on
it, you use a cursor. For example, you can check the hierarchical-query
examples in BOL.
Most of the time you *can* conditionally perform an operation for each
row using a WHERE clause or using some combination of a WHERE clause
and CASE
expressions. Most of the time that is a better option than using a
cursor. The situations where you cannot do it with set-based DML
statements (SELECT, UPDATE, DELETE, INSERT) are usually to do with
iterative operations that cannot feasibly be defined declaratively -
for example the problem of expanding an adjacency list hierarchy to an
unknown depth. In those cases you may find that a cursor or client-side
code are the optimum solutions.
The other sensible application for a cursor is administrative and
management processes that need to execute non-data-manipulation code
for each row in a set - for example performing backups, loading files,
or sending emails. For data manipulation operations, however, 99% of
the time there are better solutions that don't require cursors.
Cursors are a bad idea to use as they hold locks on tables for their
duration. They are also not necessary, as you can use SET ROWCOUNT 1 to
loop through records individually, or use a correlated subquery.
With Warm Regards
Jatinder Singh|||A cursor typically takes twice as long as a set-based select, and that's
only for reading. Inserting within the fetch loop will be several
magnitudes slower, because each separate insert forces all indexes to be
updated. A set-based insert allows SQL Server to optimize all of the index
updates.
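Spelled out with the question's tables A and B (the column names are placeholders), the two approaches compare like this:

```sql
-- Way I: set-based copy. One statement, one pass; SQL Server can
-- optimize logging and index maintenance across the whole batch.
INSERT INTO A (Col1, Col2)
SELECT Col1, Col2 FROM B

-- Way II: row-by-row cursor. Shown only for comparison; each INSERT
-- separately updates every index, which is why it is much slower.
DECLARE @c1 INT, @c2 INT
DECLARE cur CURSOR FAST_FORWARD FOR SELECT Col1, Col2 FROM B
OPEN cur
FETCH NEXT FROM cur INTO @c1, @c2
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO A (Col1, Col2) VALUES (@c1, @c2)
    FETCH NEXT FROM cur INTO @c1, @c2
END
CLOSE cur
DEALLOCATE cur
```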

Best Performance

We're looking to upgrade our SQL Server - We have two options - A dual
processor machine running Enterprise edition, or a quad processor machine
running standard edition.
The question I'd like to ask is which is likely to be the faster - Obviously
Enterprise edition will allow us to have more memory and should therefore
hopefully cut down the number of recompiles, but having four processors...
Help !!!
Andy
FYI: There can be many reasons for a recompile. But you must be using
really many stored procedures and very little memory for that to matter:
SQL Server makes the tradeoff of how to use the available memory, and it
knows a recompilation is expensive. Because of that, there will be few
recompiles because of memory constraints.
Gert-Jan
|||The correct answer to almost any performance question is 'it depends'. <g>
I don't have enough info to say with certainty... no one does...
But generally speaking... I would be SHOCKED if the 4X Standard box didn't
perform better than the 2X Enterprise. Enterprise can support more memory,
but the Standard limits are pretty darn good. If you can get adequate
performance on a 2X box, that almost implies to me that your app and data
set isn't big enough to worry about going beyond 2 GB anyway...
Brian Moran
Principal Mentor
Solid Quality Learning
SQL Server MVP
http://www.solidqualitylearning.com

Best performance

What is the best thing to do to get the best performance?
I have a database that contains 600 tables. Those tables are reached by 10 different applications that we developed. I want to know whether it's better for performance to spread those 600 tables across 10 databases or leave them in one database. Some tables are accessed by multiple applications, but most of the tables are only for one application.
Cris,
This is a nearly impossible question to answer correctly. But...
1. Optimizer seems to be smartest when working within a single database.
Constraints are only within a database.
2. 600 tables is not that many. Some ERP systems have _well_ over 15,000
tables and run just fine.
3. If you have I/O problems, spreading the files (and each db has its own
files) across separate spindles can give you improvement. (But you have to
have a lot of activity before this really becomes worth doing.)
I would break up my tables or keep them together based more on how I logically
view the systems and how I administer them, rather than using performance as
the first criterion.
Russell Fields
|||Hi,
In case you have multiple disk controllers, you can create filegroups. SQL Server can improve performance by controlling the placement of data and indexes onto specific disk drives. This will reduce I/O contention and give more performance.
One more option is to create 2 databases, or go for 1 more server for reporting (in case you have quite a lot of report generation):
1. Make the reporting database sync using transactional replication or log shipping, and keep it read-only.
This will reduce the query load on the online database, and your OLTP will be much faster.
Thanks
Hari
MCDBA
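The filegroup placement idea above can be sketched as follows (the database name, paths, logical file name, and index are all hypothetical):

```sql
-- Add a filegroup backed by a file on a second controller, then place a
-- heavily used index there so its I/O is served by different spindles:
ALTER DATABASE MyDb ADD FILEGROUP FG2

ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_FG2_1', FILENAME = 'E:\Data\MyDb_FG2_1.ndf')
    TO FILEGROUP FG2

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON Orders (CustomerId) ON FG2
```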

Best Performance

We're looking to upgrade our SQL Server - We have two options - A dual
processor machine running Enterprise edition, or a quad processor machine
running standard edition.
The question I'd like to ask is which is likely to be the faster - Obviously
Enterprise edition will allow us to have more memory and should therefore
hopefully cut down the number of recompiles, but having four processors...
Help !!!
Andy
Enterprise is not necessarily faster than Standard.
Check the following for further details:
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/evaluate/choose.asp
If you think about it for a moment, the speed of any application is
going to be as fast as its slowest component, which is normally (slowest
first) network, hard disk, then CPU.
So before getting your CPUs sorted you should have a look at the other
two.
If you would like further details then feel free to email
me at little_flowery_me<REMOVETHIS>.@.hotmail.com
In the meanwhile have a quick look at this...
http://www.tpc.org/
J

Best Oracle source provider?

Hello,
I am using a massive Oracle db in SSIS and am trying to figure out the best way to increase performance in moving data from this db to SQL Server. As far as Oracle sources go, which seems to have the best performance?
Thanks,
Adrian
What did you mean by the 'Oracle sources'? We tried using OLEDB for Oracle and the performance is really poor.|||I've had that experience as well with OLEDB. What I meant by sources is either the .NET OracleClient, the Microsoft OLEDB provider for Oracle, or the Oracle provider for OLEDB. I believe these are the only 3 options to connect to Oracle. I can't even use the Oracle provider for OLEDB without numerous errors, and have found that the other 2 are quite slow as well. So I was just wondering what other people's experiences were, and whether they found one to be superior to another. And maybe some input from the SSIS developers as to what their suggestions are.
Thanks,
Adrian|||The OLEDB providers for Oracle that are currently available aren't terribly fast. Have you considered replicating (a subset of) your data to SQL Server from Oracle before (or at the beginning of) your package execution? This topic in BOL is probably where to start: ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/rpldata9/html/2e013259-0022-4897-a08d-5f8deb880fa8.htm|||Hello,

We have built a super-fast connector for Oracle for SSIS. We have been able to achieve about a 30-times performance improvement over the OLEDB Oracle provider; e.g. we loaded 1 million rows in about 2:20. Let me know if you are interested in beta bits.

Regards
Sidharth
http://www.persistentsys.com
siddossy@.hotmail.com
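While the provider situation is poor, one workaround is to pull through a linked server instead of an SSIS source component. A rough sketch, where the linked server name ORA_SRC, the staging table, and the Oracle column/table names are all assumptions:

```sql
-- Assumes a linked server named ORA_SRC already points at the Oracle box.
-- OPENQUERY ships the query text to Oracle and streams the rows back in
-- one pass, materializing them into a local staging table.
SELECT *
INTO dbo.OracleStaging
FROM OPENQUERY(ORA_SRC, 'SELECT col1, col2 FROM big_table')
```

Whether this beats the SSIS OLEDB source still depends on the provider the linked server itself is configured with, so it is worth benchmarking both paths against the same table.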

Saturday, February 25, 2012

Best indexes to use

Hello,
I am currently working on some database tables that are suffering from
performance issues and would like some advice on which indexes are best
to use.
There exists a composite key on three fields out of 7 in the table and
these three fields are the ones normally used when searching the table
as well.
1) As most queries return only single answers, would it be better to
add three separate indexes or a composite index?
2) Also, from my research, most people indicate using composite index
would be better - is this the case?
3) If I use a composite index, would it be used when searching using
only one or two of the composite fields?
Thanks in advance,
j
jumblesale wrote:
> Hello,
> I am currently working on some database tables that are suffering from
> performance issues and would like some advice on which indexes are best
> to use.
> There exists a composite key on three fields out of 7 in the table and
> these three fields are the ones normally used when searching the table
> as well.
> 1) As most queries return only single answers, would it be better to
> add three seperate indexes or a composite index?
> 2) Also, from my research, most people indicate using composite index
> would be better - is this the case?
> 3) If I use a composite index, would it be used when searching using
> only one or two of the composite fields?
> Thanks in advance,
> j
>
You can look at the execution plan to see what indexes are being used by
your query. I'll try to give an overly simplified explanation:
Suppose I have a table with 7 columns, Col1 thru Col7.
I have an index named Index1 using Col1, Col2, Col3 as a composite key.
Running the following queries (data volumes/optimizer whims may alter
these behaviors):
-- This will do an index seek using Index1
SELECT Col1, Col2
FROM MyTable
WHERE Col1 = 'x'
-- This will do an index SCAN using Index1
SELECT Col1, Col2
FROM MyTable
WHERE Col2 = 'x'
-- This will do an index seek using Index1, with a bookmark lookup to
get Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col1 = 'x'
-- This will do an index SCAN using Index1, with a bookmark lookup to
get Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col2 = 'x'
Does that make sense? Now let's add another index, Index2, to the
table, using Col2, Col4 as a key:
-- This will now do an index seek using Index2
SELECT Col1, Col2
FROM MyTable
WHERE Col2 = 'x'
-- This will do an index seek using Index2, with a bookmark lookup to
get the value of Col1, not Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col2 = 'x'
These are ridiculously simple examples, and when run against real data
volumes, the optimizer may choose a different course of action,
depending on statistics, distribution of values, etc...
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Cheers for your reply - that's great in regards to the composite index
issue i'm having, but in these kinds of scenarios, would I be right in
thinking that a non-clustered index covering the three columns would be
of more use than a clustered index because I am using queries that
bring back small numbers of results (usually one)?
j
> You can look at the execution plan to see what indexes are being used by
> your query. I'll try to give an overly simplified explanation:
> Suppose I have a table with 7 columns, Col1 thru Col7.
> I have an index named Index1 using Col1, Col2, Col3 as a composite key.
> Running the following queries (data volumes/optimizer whims may alter
> these behaviors):
> -- This will do an index seek using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index1, with a bookmark lookup to
> get Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1, with a bookmark lookup to
> get Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> Does that make sense? Now let's add another index, Index2, to the
> table, using Col2, Col4 as a key:
> -- This will now do an index seek using Index2
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index2, with a bookmark lookup to
> get the value of Col1, not Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> These are ridiculously simple examples, and when run against real data
> volumes, the optimizer may choose a different course of action,
> depending on statistics, distribution of values, etc...
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||jumblesale wrote:
> Cheers for your reply - that's great in regards to the composite index
> issue i'm having, but in these kinds of scenarios, would I be right in
> thinking that a non-clustered index covering the three columns would be
> of more use than a clustered index because I am using queries that
> bring back small numbers (usually one) result?
>
I'm going to have to use the canned response of "it depends" on this
one. *GENERALLY* clustered indexes perform best on range scans, BUT,
they can also be used by non-clustered indexes to avoid doing bookmark
lookups. Your best bet is to experiment a little, compare the execution
plan of your query when different indexes are available, see what
performs best given your data.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Tracy's given some nice samples here. I'll just chime in with a little extra
which might help you understand.
Ideally, you want the columns that are being filtered in the query + those
being returned in the select list in the index. If you have many different
queries that run against the table, you might need a few different indexes
to accommodate their various requirements. Don't be afraid of adding more
than one column to an index & also don't be afraid of adding multiple
indexes to a table.
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:4506C241.8090806@.realsqlguy.com...
> jumblesale wrote:
>> Hello,
>> I am currently working on some database tables that are suffering from
>> performance issues and would like some advice on which indexes are best
>> to use.
>> There exists a composite key on three fields out of 7 in the table and
>> these three fields are the ones normally used when searching the table
>> as well.
>> 1) As most queries return only single answers, would it be better to
>> add three seperate indexes or a composite index?
>> 2) Also, from my research, most people indicate using composite index
>> would be better - is this the case?
>> 3) If I use a composite index, would it be used when searching using
>> only one or two of the composite fields?
>> Thanks in advance,
>> j
> You can look at the execution plan to see what indexes are being used by
> your query. I'll try to give an overly simplified explanation:
> Suppose I have a table with 7 columns, Col1 thru Col7.
> I have an index named Index1 using Col1, Col2, Col3 as a composite key.
> Running the following queries (data volumes/optimizer whims may alter
> these behaviors):
> -- This will do an index seek using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index1, with a bookmark lookup to get
> Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1, with a bookmark lookup to get
> Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> Does that make sense? Now let's add another index, Index2, to the table,
> using Col2, Col4 as a key:
> -- This will now do an index seek using Index2
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index2, with a bookmark lookup to get
> the value of Col1, not Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> These are ridiculously simple examples, and when run against real data
> volumes, the optimizer may choose a different course of action, depending
> on statistics, distribution of values, etc...
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||ha! sorry for backing you into a corner there - good advice though,
I'll have to get working on the indexes and check the execution plans -
just wanted a bit of advice before I start playing about on the server.
Thanks for your ideas,
j
Tracy McKibben wrote:
> I'm going to have to used the canned response of "it depends" on this
> one. *GENERALLY* clustered indexes perform best on range scans, BUT,
> they can also be used by non-clustered indexes to avoid doing bookmark
> lookups. Your best bet is to experiment a little, compare the execution
> plan of your query when different indexes are available, see what
> performs best given your data.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||Hi Tracy
Re >*GENERALLY* clustered indexes perform best on range scans<
I'd love to hear your thoughts on this blog I did yesterday.
http://blogs.sqlserver.org.au/blogs/greg_linwood/archive/2006/09/11/365.aspx
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:4506C596.30004@.realsqlguy.com...
> jumblesale wrote:
>> Cheers for your reply - that's great in regards to the composite index
>> issue i'm having, but in these kinds of scenarios, would I be right in
>> thinking that a non-clustered index covering the three columns would be
>> of more use than a clustered index because I am using queries that
>> bring back small numbers (usually one) result?
> I'm going to have to used the canned response of "it depends" on this one.
> *GENERALLY* clustered indexes perform best on range scans, BUT, they can
> also be used by non-clustered indexes to avoid doing bookmark lookups.
> Your best bet is to experiment a little, compare the execution plan of
> your query when different indexes are available, see what performs best
> given your data.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||Greg Linwood wrote:
> Hi Tracy
> Re >*GENERALLY* clustered indexes perform best on range scans<
> I'd love to hear your thoughts on this blog I did yesterday.
> http://blogs.sqlserver.org.au/blogs/greg_linwood/archive/2006/09/11/365.aspx
>
I have to admit, I've never actually compared clustered vs.
non-clustered indexes in the detail that you have here. I'm guilty of
simply repeating what I've picked up from other sources. Your analysis
makes perfect sense to me, but I'm curious... Your example assumes
we're returning a subset of columns from every row in the table.
Suppose our table contains a million rows, clustered on a "SessionID"
column. Session ID's increment throughout the day. A typical session
contains 100-150 rows, and I want to return every column from the table
for the five sessions that occurred yesterday. Without having hard
numbers to back me up, it seems like the clustered index wins in this
case. Or would you not consider this to be a "range" scan?
I think it's going to depend on how wide your query is and what sort of
data you're pulling back - can it be covered with a non-clustered index?
Either way, it's a well written article and it certainly has me thinking...
--
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Hi Greg,
Just a quick question - because I don't have a unique primary key and
require the use of a composite key on the table, would it potentially
cause a performance hit if I use composite NCIX only and have no CIX on
the table?
Cheers for your posts btw - very interesting blog article too - it
confirmed some tests I thought i'd done incorrectly in a sql refresher
course I attended last year!
j
Greg Linwood wrote:
> Hi Tracy
> Re >*GENERALLY* clustered indexes perform best on range scans<
> I'd love to hear your thoughts on this blog I did yesterday.
> http://blogs.sqlserver.org.au/blogs/greg_linwood/archive/2006/09/11/365.aspx
> Regards,
> Greg Linwood
> SQL Server MVP
> http://blogs.sqlserver.org.au/blogs/greg_linwood
>|||jumblesale wrote:
> ha! sorry for backing you into a corner there - good advice though,
> i'll have to get working on the indexes and check the execution plans -
> just wanted a bit of advise before I start playing about on the server.
>
I spend most of my time sitting in the corner... :-)
There's just no way to give a definitive answer without having intimate
knowledge of the data involved, and being able to work with it
first-hand. Good luck!
Tracy McKibben
MCDBA
http://www.realsqlguy.com
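For the original three-column question in this thread, the two options being debated look like this (table and column names are placeholders, matching Tracy's examples):

```sql
-- Option 1: one composite index on the three search columns
CREATE NONCLUSTERED INDEX IX_MyTable_Composite
    ON dbo.MyTable (Col1, Col2, Col3);

-- Option 2: three separate single-column indexes
CREATE NONCLUSTERED INDEX IX_MyTable_Col1 ON dbo.MyTable (Col1);
CREATE NONCLUSTERED INDEX IX_MyTable_Col2 ON dbo.MyTable (Col2);
CREATE NONCLUSTERED INDEX IX_MyTable_Col3 ON dbo.MyTable (Col3);
```

As the replies note, the composite index supports seeks on the leading-column prefixes Col1, (Col1, Col2), and (Col1, Col2, Col3), while a predicate on Col2 or Col3 alone can only scan it; compare the execution plans against your real data before settling on either.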

Best filtering solution for performance?

Hello,

I have a report with approximately 20 report parameters from queries. This is really slowing down the system, although the data load (rows sent back) is not that huge. I read on the internet that there are basically three approaches.

1, Query parameter with alot of diffrent datasources

2, Table filtering

3, Stored procedures

I wonder which approach is the best and why it takes such a long time to get the report with the report parameters (before generating the report itself).

Thank you for your help! Smile

I would think stored procedures would be the best approach. Have you tried using SQL Profiler to trace where the slowdown is?

cheers,

Andrew

|||

Hello,

thank you for your fast response! I have now changed to stored procedures; I think in the long term it is better to use SPs. But now I have problems with multivalued parameters, which will be sent as an array to SQL Server. I found some articles about this matter and will try to solve it. The performance problem seems to be solved, although I don't know why. Will look into that later. Thanks for the Profiler tip! Best regards! Smile
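One common way to handle the multivalued-parameter problem mentioned above is a small split function on the SQL Server side, assuming Reporting Services passes the values as one comma-delimited string. A minimal sketch (function and column names are made up):

```sql
-- Splits a comma-delimited list into a table (SQL Server 2005 syntax)
CREATE FUNCTION dbo.SplitList (@List nvarchar(max))
RETURNS @Items TABLE (Item nvarchar(100))
AS
BEGIN
    DECLARE @Pos int
    SET @List = @List + ','          -- sentinel so the loop always finds a comma
    WHILE LEN(@List) > 0
    BEGIN
        SET @Pos = CHARINDEX(',', @List)
        IF @Pos > 1                  -- skip empty elements
            INSERT @Items VALUES (LTRIM(LEFT(@List, @Pos - 1)))
        SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List))
    END
    RETURN
END
```

The stored procedure can then filter with `WHERE SomeColumn IN (SELECT Item FROM dbo.SplitList(@Param))` instead of trying to use the raw parameter string.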

Friday, February 24, 2012

Best Config

I have a Windows 2K server using SQL 2K. I am trying to
identify what RAID configuration I should use for the
best performance. The application is an OLTP variety. I
have 5 physical disks. What would be the best set-up
(performance is more important than redundancy). If you
need any more info please ask. TIA.
Marcus
Although the number of concurrent users is a potential issue.
Mirror the log, raid 10 the data ( but you don't have enough disk for that)
so
Mirror the log, raid 0 the data (stripe with stripe size of 64K)
opinions will vary... but avoiding RAID 5 will give you a write improvement
(R5 has at least a 75% write penalty).
This will not give you data redundancy, but you can get up-to-the-minute
recovery without loss of data...
Wayne Snyder, MCDBA, SQL Server MVP
Computer Education Services Corporation (CESC), Charlotte, NC
www.computeredservices.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Marcus" <marcus12@.hotmail.com> wrote in message
news:0d4a01c36268$9feebd10$a501280a@.phx.gbl...
> I have a windows 2K server using SQL 2K. I am trying to
> indentify what RAID configuation I should use for the
> best performance. The application is an OLTP variety. I
> have 5 physical disks. What would be the best set-up
> (performance is more important than redundancy). If you
> need any more info please ask. TIA.
> Marcus
>

Benefits/drawbacks with NVarChar(max)

Hi,

I wonder if there are any drawbacks with NVarChar(max) versus e.g. NVarChar(40) when it comes to performance and amount of storage used?

Are there other arguments to use e.g. NVarChar(40) than that you have more control of how long the strings are when you set the upper limit?

I'm using Sql Server 2005.

Tomsi

Hey Tomsi. Using any of the (max) datatypes will basically tell the server that the data in that column could possibly grow to 2gb if desired. To achieve this, the database engine will evaluate the size of the data being inserted/updated into the column and store it appropriately depending on the size. If the size of the data being stored will fit in-row with the rest of the data for that row (i.e. if it's less than 8k minus the size of the other columns in the row give or take some other considerations), then SQL Server will store the data in-row with the rest of the row data. If it is larger than that and can't be stored in-row with the rest of the row's data, then SQL Server will store a pointer in the row with the data that points to the location of the actual data elsewhere. During read/write operations for that row from now on, the engine will have to jump from the pointer to the data, get/update the data, then jump back to the pointer, so read/write time is slowed in this case.

You'll also see a slight additional overhead for determining if the data can fit in row or not, but that will be miniscule.

General recommendation would be that if you know the size of the data will never exceed 'x', and x is <= ~8000 bytes, then use nvarchar(x)...if the data will in some cases or always exceed that size, then use the (max) indicator...

You could see this article for a bit more info and go from there

http://msdn2.microsoft.com/en-us/library/ms178158.aspx

HTH
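The in-row versus off-row behaviour described above is easy to see with DATALENGTH; a small sketch (the table name is made up):

```sql
-- nvarchar stores 2 bytes per character, so the declared limit caps the
-- byte size; (max) lets the value grow and be pushed off-row past ~8000 bytes.
CREATE TABLE dbo.LengthDemo
(
    ShortName nvarchar(40),   -- at most 80 bytes of data, always in-row
    LongText  nvarchar(max)   -- in-row while it fits, off-row via pointer otherwise
);

-- The CAST to nvarchar(max) matters: without it REPLICATE truncates at 8000 bytes.
INSERT dbo.LengthDemo
VALUES (N'abc', REPLICATE(CAST(N'x' AS nvarchar(max)), 10000));

SELECT DATALENGTH(ShortName) AS ShortBytes,  -- 6
       DATALENGTH(LongText)  AS LongBytes    -- 20000
FROM dbo.LengthDemo;
```

The 20,000-byte value cannot fit in an 8KB page, so it is stored off-row with a pointer in the row, which is the extra read/write hop described in the reply.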

Sunday, February 19, 2012

Benefits of 64 bit

I understand that the 64 environment allows more addressable memory.
Are there any additional performance benefits other than possibly those
gained by increased memory addressability?
Assuming the same database running on a 32 bit environment and a 64 bit
environment and applications hitting both environments with the same
work load. Can the 64 bit environment have a higher throughput. For
this hypothetical case let us assume that advantages of having a bigger
proc cache does not help.
cheers
Ken
Here is an MS enumeration of the advantages:
http://www.microsoft.com/sql/techinfo/whitepapers/advantages-64bit-environment.mspx
> Are there any additional performance benefits other than possibly those
> gained by increased memory addressability?
We've seen a roughly 40% reduction in query times on a 64bit server vs.
its 32bit equivalent (sql2k5 on both). We're using Intel with 64bit
extensions rather than Itanium.
> Assuming the same database running on a 32 bit environment and a 64 bit
> environment and applications hitting both environments with the same
> work load. Can the 64 bit environment have a higher throughput.
This does seem to be the case, though you still have to consider disk
IO limitations (assuming you're not storing your entire database in the
increased RAM on your 64bit server).
Although the performance increases are notable, and MS claims the 64bit
version is fully functional and supported, we have had a few trouble
areas with the 64bit version. SSIS seems quirky, along with
OPENDATASOURCE between 64bit and 32bit servers (not that you'd
necessarily want to use that a lot :))
All-in-all, we're not disappointed.
Have fun!
KenJ|||Proc cache isn't the main consumer of memory that benefits from being on 64 bit.
The main beneficiary is the increased data caching capability & this is by
far the most significant single reason for upgrading to 64 bit, at least
from a performance perspective.
Regards,
Greg Linwood
SQL Server MVP
<raidken@.yahoo.com> wrote in message
news:1156990437.923057.219500@.i42g2000cwa.googlegroups.com...
>I understand that the 64 environment allows more addressable memory.
> Are there any additional performance benefits other than possibly those
> gained by increased memory addressability?
> Assuming the same database running on a 32 bit environment and a 64 bit
> environment and applications hitting both environments with the same
> work load. Can the 64 bit environment have a higher throughput. For
> this hypothetical case let us assume that advantages of having a bigger
> proc cache does not help.
> cheers
> Ken
>|||I agree that the main beneficiary of increased memory is the buffer
cache. Even with 32 bit, large amounts of memory could be addressed
through AWE. Which brings up another question, does 64 bit give a
substantial performance boost by eliminating the need for AWE?
KenJ, you mentioned that query performance increased by 40%, was your
32 bit database memory bound? Do you attribute the improvement to a higher
buffer hit rate, reduced IO queue length? I would be very interested in
discussing this further if you are willing.
cheers,
ken|||It appears that you are expecting to gain performance going to 64 bit. That
may very well be the case, but it may just turn out not to be the case. The
outcome really depends on your workloads. I have seen the same app coming out
with lower throughput on 64 bit than on 32 bit with the same hardware. You
need to test your app to be sure.
Linchi
"raidken@.yahoo.com" wrote:
> I understand that the 64 environment allows more addressable memory.
> Are there any additional performance benefits other than possibly those
> gained by increased memory addressability?
> Assuming the same database running on a 32 bit environment and a 64 bit
> environment and applications hitting both environments with the same
> work load. Can the 64 bit environment have a higher throughput. For
> this hypothetical case let us assume that advantages of having a bigger
> proc cache does not help.
> cheers
> Ken
>|||Using 64-bit instead of AWE provides a small boost.
If you take a look at the TPC.org web site, you'll see some benchmarks:
a 4-CPU server provides 206,000 tpmC using SQL 2005 x64 (128GB of RAM) and
188,000 tpmC with SQL 2000 (64GB of RAM),
so having twice the memory on the server provides only a small advantage,
and the x64 version is not so helpful in this case.
In fact, the difference is also at the disk level: to compensate for the lack
of memory, the SQL 2000 benchmark uses twice as many disks (a total of 30TB
versus 15TB).
The advantage for small and mid-size businesses is the ability to share the
server among multiple applications and let SQL Server manage more memory
without locking that memory.
The AWE option locks the memory, and that memory is not available to other
applications; it's good for high-performance, dedicated servers. But in the
real world one server supports more than one application (companies install RS,
SQL and AS on the same server to reduce the license cost), and locking the memory
is not good. In these conditions the x64 version is really good: all the
server memory is used but not locked, and depending on which application is more
in demand, the server balances the memory usage between them.
But for dedicated servers like intensive transactional systems the
advantage is small.
The big changes, though, are in AS 2005 and SSIS, where the x64 platform
provides a huge advantage.
And to finish: because there is no difference in price, and for future
compatibility, use the x64 version. There is no disadvantage of the x64
platform versus x86, so if you have the choice, use it!!!
Also, 32-bit CPUs will quickly disappear from the market with the price war
between Intel and AMD.
<raidken@.yahoo.com> wrote in message
news:1156995016.052764.299640@.i42g2000cwa.googlegroups.com...
>I agree that the main beneficiary of increased memory is the buffer
> cache. Even with 32 bit a large amounts of memory could be addresses
> through AWE. Which brings up another question, does 64 bit give a
> substantial performance boost by eliminating the need for AWE?
> KenJ, you mentioned that query performance increased by 40%, was your
> 32 bit database memory bound? Do u attribute the improvement to higher
> buffer hit rate, reduced IO queue length? I would be very interested in
> discussing this further if you are willing.
> cheers,
> ken
>|||There is some degree of overhead associated with AWE infrastructure, but
it's a relatively small part of the picture.
There's also more to compare than simply whether you can address 32Gb via 32
bit AWE or via 64 bit without AWE. For example, the 64 bit version of SQL
Server 2005 Standard Edition 2005 is limited to 32Gb RAM (on Win 2003 EE 64
bit), whilst the 32 bit version of SQL Server 2005 Standard Edition can't
get anywhere near that amount (I'm not sure what the actual amount is, but I
think it might be 4Gb)
Regards,
Greg Linwood
SQL Server MVP
<raidken@.yahoo.com> wrote in message
news:1156995016.052764.299640@.i42g2000cwa.googlegroups.com...
>I agree that the main beneficiary of increased memory is the buffer
> cache. Even with 32 bit a large amounts of memory could be addresses
> through AWE. Which brings up another question, does 64 bit give a
> substantial performance boost by eliminating the need for AWE?
> KenJ, you mentioned that query performance increased by 40%, was your
> 32 bit database memory bound? Do u attribute the improvement to higher
> buffer hit rate, reduced IO queue length? I would be very interested in
> discussing this further if you are willing.
> cheers,
> ken
>|||Hi Linchi
Any chance you can share what type of workload you've identified that
exhibits these characteristics & how this has been measured?
Regards,
Greg Linwood
SQL Server MVP
"Linchi Shea" <LinchiShea@.discussions.microsoft.com> wrote in message
news:173E5682-2471-4A6F-9DD7-33125DCEB413@.microsoft.com...
> It appears that you are expecting to gain performance going to 64 bit.
> That
> may very well be the case, but it may just turn out not to be case. The
> outcome really depends on your workloads. I have seen the same app coming
> out
> with lower throughput on 64 bit than on 32 bit with the same hardware. You
> need to test your app to be sure.
> Linchi
> "raidken@.yahoo.com" wrote:
>> I understand that the 64 environment allows more addressable memory.
>> Are there any additional performance benefits other than possibly those
>> gained by increased memory addressability?
>> Assuming the same database running on a 32 bit environment and a 64 bit
>> environment and applications hitting both environments with the same
>> work load. Can the 64 bit environment have a higher throughput. For
>> this hypothetical case let us assume that advantages of having a bigger
>> proc cache does not help.
>> cheers
>> Ken
>>|||raidken@.yahoo.com wrote:
> I understand that the 64 environment allows more addressable memory.
> Are there any additional performance benefits other than possibly those
> gained by increased memory addressability?
Putting aside the benefits for a moment, I'd strongly advise checking
for 64-bit availability of components where necessary. I recently got
dragged in to the later stages of a project where this hadn't been done
and had real trouble getting a 64-bit SQL 2005 installation to link
through to an old Informix box. In the end they had to roll it back to
32-bit.|||Greg;
I can't share it publicly. But if you drop me an email.
Linchi
"Greg Linwood" wrote:
> Hi Linchi
> Any chance you can share what type of workload you've identified that
> exhibits these characteristics & how this has been measured?
> Regards,
> Greg Linwood
> SQL Server MVP
> "Linchi Shea" <LinchiShea@.discussions.microsoft.com> wrote in message
> news:173E5682-2471-4A6F-9DD7-33125DCEB413@.microsoft.com...
> > It appears that you are expecting to gain performance going to 64 bit.
> > That
> > may very well be the case, but it may just turn out not to be case. The
> > outcome really depends on your workloads. I have seen the same app coming
> > out
> > with lower throughput on 64 bit than on 32 bit with the same hardware. You
> > need to test your app to be sure.
> >
> > Linchi
> >
> > "raidken@.yahoo.com" wrote:
> >
> >> I understand that the 64 environment allows more addressable memory.
> >> Are there any additional performance benefits other than possibly those
> >> gained by increased memory addressability?
> >>
> >> Assuming the same database running on a 32 bit environment and a 64 bit
> >> environment and applications hitting both environments with the same
> >> work load. Can the 64 bit environment have a higher throughput. For
> >> this hypothetical case let us assume that advantages of having a bigger
> >> proc cache does not help.
> >>
> >> cheers
> >>
> >> Ken
> >>
> >>
>
>
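The AWE discussion in this thread maps to a short configuration step on 32-bit builds. A sketch of the commonly documented settings (the service account also needs the Lock Pages in Memory privilege, which is granted in Windows policy rather than T-SQL, and the memory cap value here is just an example):

```sql
-- Enable AWE on a 32-bit SQL Server instance (it's an advanced option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
-- Optionally cap the buffer pool so co-hosted applications keep some memory
EXEC sp_configure 'max server memory', 12288;  -- MB, example value
RECONFIGURE;
```

As the posts above point out, AWE-mapped memory is locked and only the data cache benefits from it, which is exactly the tradeoff the 64-bit editions remove.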

Benefits of 64 bit

I understand that the 64 bit environment allows more addressable memory.
Are there any additional performance benefits other than possibly those
gained by increased memory addressability?
Assuming the same database running on a 32 bit environment and a 64 bit
environment, and applications hitting both environments with the same
workload, can the 64 bit environment have a higher throughput? For
this hypothetical case, let us assume that the advantages of having a bigger
proc cache do not help.
cheers
Ken|||Here is an MS enumeration of the advantages:
http://www.microsoft.com/sql/techin...nt.mspx

> Are there any additional performance benefits other than possibly those
> gained by increased memory addressability?
We've seen a roughly 40% reduction in query times on a 64bit server vs.
its 32bit equivalent (sql2k5 on both). We're using Intel with 64bit
extensions rather than Itanium.
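If you want to reproduce that kind of comparison yourself, SET STATISTICS
TIME gives quick per-query numbers (a sketch; the table name is just a
placeholder, substitute a representative query from your own workload):

```sql
-- Rough per-query timing comparison; run the same query on both servers.
-- "your_table" is a placeholder table name.
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM your_table;
SET STATISTICS TIME OFF;
-- CPU and elapsed times appear in the Messages tab; run the query several
-- times so the buffer cache is warm before comparing numbers.
```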

> Assuming the same database running on a 32 bit environment and a 64 bit
> environment and applications hitting both environments with the same
> work load. Can the 64 bit environment have a higher throughput.
This does seem to be the case, though you still have to consider disk
IO limitations (assuming you're not storing your entire database in the
increased RAM on your 64bit server).
Although the performance increases are notable, and MS claims the 64bit
version is fully functional and supported, we have had a few trouble
areas with the 64bit version. SSIS seems quirky, along with
OPENDATASOURCE between 64bit and 32bit servers (not that you'd
necessarily want to use that a lot)
All-in-all, we're not disappointed.
Have fun!
KenJ|||Proc cache isn't the main consumer of memory that benefits from being on 64 bit.
The main beneficiary is the increased data caching capability, & this is by
far the most significant single reason for upgrading to 64 bit, at least
from a performance perspective.
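One rough way to see that data caching in action is to check how much of the
buffer pool each database is holding (a diagnostic sketch using the 2005
DMVs; needs VIEW SERVER STATE permission):

```sql
-- Approximate buffer pool usage per database (SQL Server 2005+).
-- Each row in the DMV represents one 8 KB page held in cache.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```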
Regards,
Greg Linwood
SQL Server MVP
<raidken@.yahoo.com> wrote in message
news:1156990437.923057.219500@.i42g2000cwa.googlegroups.com...
>I understand that the 64 environment allows more addressable memory.
> Are there any additional performance benefits other than possibly those
> gained by increased memory addressability?
> Assuming the same database running on a 32 bit environment and a 64 bit
> environment and applications hitting both environments with the same
> work load. Can the 64 bit environment have a higher throughput. For
> this hypothetical case let us assume that advantages of having a bigger
> proc cache does not help.
> cheers
> Ken
>|||I agree that the main beneficiary of increased memory is the buffer
cache. Even with 32 bit, large amounts of memory could be addressed
through AWE. Which brings up another question: does 64 bit give a
substantial performance boost by eliminating the need for AWE?
KenJ, you mentioned that query performance increased by 40%; was your
32 bit database memory bound? Do you attribute the improvement to a higher
buffer hit rate or reduced IO queue length? I would be very interested in
discussing this further if you are willing.
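For what it's worth, the buffer hit rate I'm asking about can be read
straight from the performance counters DMV (a sketch for 2005+; the ratio
counter has to be divided by its "base" counter to get a percentage):

```sql
-- Buffer cache hit ratio = ratio counter / base counter * 100.
SELECT 100.0 * a.cntr_value / b.cntr_value AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.object_name = b.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```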
cheers,
ken|||It appears that you are expecting to gain performance going to 64 bit. That
may very well be the case, but it may just turn out not to be the case. The
outcome really depends on your workloads. I have seen the same app coming out
with lower throughput on 64 bit than on 32 bit with the same hardware. You
need to test your app to be sure.
Linchi
"raidken@.yahoo.com" wrote:

> I understand that the 64 environment allows more addressable memory.
> Are there any additional performance benefits other than possibly those
> gained by increased memory addressability?
> Assuming the same database running on a 32 bit environment and a 64 bit
> environment and applications hitting both environments with the same
> work load. Can the 64 bit environment have a higher throughput. For
> this hypothetical case let us assume that advantages of having a bigger
> proc cache does not help.
> cheers
> Ken
>|||Using 64 bit instead of AWE provides a small boost.
If you take a look at the TPC.org web site, you'll see some benchmarks.
A 4 CPU server provides 206,000 tpmC using SQL 2005 x64 (128Gb of RAM) and
188,000 tpmC with SQL 2000 (64Gb of RAM), so having 2 times the memory on
the server provides only a small advantage, and the x64 version is not so
helpful in this case.
In fact, the difference is also at the disk level: to compensate for the lack
of memory, the SQL 2000 benchmark uses 2 times more disks (a total of 30TB
versus 15TB).
The advantage for small and mid-size businesses is the ability to share the
server with multiple applications and let SQL Server manage more memory
without locking it.
The AWE option locks the memory, and that memory is not available for other
applications; that's good for high performance, dedicated servers. But in the
real world one server supports more than one application (the company installs
RS, SQL and AS on the same server to reduce the license cost), and locking the
memory is not good. In these conditions the x64 version is really good: all
the server memory is used but not locked, and depending on which application
is more in demand, the server balances the memory usage between them.
But for dedicated servers like intensive transactional systems the
advantage is small.
The big changes are in AS 2005 and SSIS, where the x64 platform provides
a huge advantage.
And to finish: because there is no difference in price, and for future
compatibility, use the x64 version. There is no disadvantage of the x64
platform versus x86, so if you have the choice, use it!
Also, 32 bit CPUs will quickly disappear from the market with the price war
between Intel and AMD.
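For completeness, here's roughly what turning AWE on for a 32 bit instance
looks like (a sketch; it assumes the service account already has the
"Lock Pages in Memory" privilege, and 'awe enabled' only takes effect after
a service restart):

```sql
-- Enable AWE on a 32 bit instance (SQL Server 2000/2005).
-- 'awe enabled' is an advanced option, so expose those first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
-- Cap memory so other applications on the box are not starved (value in MB).
EXEC sp_configure 'max server memory', 16384;
RECONFIGURE;
-- Restart the SQL Server service for 'awe enabled' to take effect.
```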
<raidken@.yahoo.com> wrote in message
news:1156995016.052764.299640@.i42g2000cwa.googlegroups.com...
>I agree that the main beneficiary of increased memory is the buffer
> cache. Even with 32 bit a large amounts of memory could be addresses
> through AWE. Which brings up another question, does 64 bit give a
> substantial performance boost by eliminating the need for AWE?
> KenJ, you mentioned that query performance increased by 40%, was your
> 32 bit database memory bound? Do u attribute the improvement to higher
> buffer hit rate, reduced IO queue length? I would be very interested in
> discussing this further if you are willing.
> cheers,
> ken
There is some degree of overhead associated with the AWE infrastructure, but
it's a relatively small part of the picture.
There's also more to compare than simply whether you can address 32Gb via 32
bit AWE or via 64 bit without AWE. For example, the 64 bit version of SQL
Server 2005 Standard Edition is limited to 32Gb RAM (on Win 2003 EE 64
bit), whilst the 32 bit version of SQL Server 2005 Standard Edition can't
get anywhere near that amount (I'm not sure what the actual amount is, but I
think it might be 4Gb).
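To see what a given instance can actually address, the 2005 DMVs expose it
directly (a sketch; I'm assuming the SQL Server 2005 column names here,
which were renamed in later versions):

```sql
-- Physical memory visible to the instance, and buffer pool pages committed
-- (bpool_committed is counted in 8 KB pages).
SELECT physical_memory_in_bytes / 1048576 AS physical_mb,
       bpool_committed * 8 / 1024 AS buffer_pool_committed_mb
FROM sys.dm_os_sys_info;
```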
Regards,
Greg Linwood
SQL Server MVP
<raidken@.yahoo.com> wrote in message
news:1156995016.052764.299640@.i42g2000cwa.googlegroups.com...
>I agree that the main beneficiary of increased memory is the buffer
> cache. Even with 32 bit a large amounts of memory could be addresses
> through AWE. Which brings up another question, does 64 bit give a
> substantial performance boost by eliminating the need for AWE?
> KenJ, you mentioned that query performance increased by 40%, was your
> 32 bit database memory bound? Do u attribute the improvement to higher
> buffer hit rate, reduced IO queue length? I would be very interested in
> discussing this further if you are willing.
> cheers,
> ken
>|||Hi Linchi
Any chance you can share what type of workload you've identified that
exhibits these characteristics & how this has been measured?
Regards,
Greg Linwood
SQL Server MVP
"Linchi Shea" <LinchiShea@.discussions.microsoft.com> wrote in message
news:173E5682-2471-4A6F-9DD7-33125DCEB413@.microsoft.com...
> It appears that you are expecting to gain performance going to 64 bit.
> That
> may very well be the case, but it may just turn out not to be case. The
> outcome really depends on your workloads. I have seen the same app coming
> out
> with lower throughput on 64 bit than on 32 bit with the same hardware. You
> need to test your app to be sure.
> Linchi
> "raidken@.yahoo.com" wrote:
>|||raidken@.yahoo.com wrote:

> I understand that the 64 environment allows more addressable memory.
> Are there any additional performance benefits other than possibly those
> gained by increased memory addressability?
Putting aside the benefits for a moment, I'd strongly advise checking
for 64-bit availability of components where necessary. I recently got
dragged in to the later stages of a project where this hadn't been done
and had real trouble getting a 64-bit SQL 2005 installation to link
through to an old Informix box. In the end they had to roll it back to
32-bit.|||Greg;
I can't share it publicly. But if you drop me an email.
Linchi
"Greg Linwood" wrote:

> Hi Linchi
> Any chance you can share what type of workload you've identified that
> exhibits these characteristics & how this has been measured?
> Regards,
> Greg Linwood
> SQL Server MVP
> "Linchi Shea" <LinchiShea@.discussions.microsoft.com> wrote in message
> news:173E5682-2471-4A6F-9DD7-33125DCEB413@.microsoft.com...
>
>