Saturday, February 25, 2012

Best license program

I have a small company and I make online ERP software in ASP.NET for small
companies: companies with one server and a maximum of 10 concurrent users.
What SQL Server license should I advise for these companies? And how much
does it cost?
I've tried to understand the documents on the Microsoft site but I did not
find a satisfying answer.
Thanks in advance.
Ronald
If your database won't be bigger than 2 GB, you can use MSDE, which is free.
Otherwise the best option is Standard Edition with per-seat licensing if
your clients don't have more than 25 users (not concurrent users; you need a
license for everyone who uses the server), or Standard Edition with a
processor license if there are more than 25 users.
Jacco Schalkwijk
SQL Server MVP
"Sandeman" <ilighters@.zeelandnet.nl> wrote in message
news:OAiA2bYlEHA.3452@.TK2MSFTNGP15.phx.gbl...
>I have a small company and i make online ERP software in ASP.NET for small
> companies. Companies with one server en a maximum of 10 concurrent users.
> What SQL server license should i advice for these companies? And how much
> does it costs?
> I've tried to understand the documents on the microsoft site but i did not
> find an satisfying answer.
> Thanks in advance.
> Ronald
>

best jdbc 1.0 centric driver for sql server

Hello Joe

Over the past several years, I have found your responses to jdbc
usage/driver related issues to be extremely helpful. I am sure that
you're very busy so I will make my question as brief as possible - the
reason I didn't post this to a newsgroup is that I don't want to hear
any more hype or links to benchmarks (since the vast majority of the
benchmarks don't test concurrent/mixed-transaction environments). (I am
also posting a modified version of this message to some newsgroups, but
based on past responses - don't have too much hope that these will lead
to a resolution.) If the only way to answer this question would be to
have a phone conversation, my company would be more than happy to pay a
consulting-service fee for your time.

My situation:
My company has an enterprise-level web app that targets SQL Server. We
don't use J2EE, so we stick to simple-to-administer web containers
(JRun, Tomcat). Recently we have been getting worse and worse
performance from our JDBC driver (we use the free MS JDBC driver):
strange transaction resource handling, choppy overall performance,
etc. I have spent significant time tweaking it (following various
advice, many times yours, on newsgroups; I can go into details, but I
don't want to take up your time). As the project's architect, I need
to do something about this problem, but the various high-level tunings
I have applied to the way we use connections with this driver haven't
significantly improved performance under normal everyday load. (Our
queries often span tables with millions of records, and are relatively
dynamic. We use a separate pool of autocommit-off connections for
writes, and autocommit-on connections for reads.)
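For what it's worth, the split-pool pattern described above (autocommit-off connections for writes, autocommit-on connections for reads) can be sketched in a few lines. The original setup is JDBC against SQL Server; this is a hypothetical, portable illustration using Python's stdlib sqlite3, with made-up table and file names:

```python
import os
import sqlite3
import tempfile

# Shared database file so two connections can see the same data.
db = os.path.join(tempfile.mkdtemp(), "demo.db")

# "Write pool" connection: autocommit off, explicit transactions.
writer = sqlite3.connect(db, isolation_level="DEFERRED")
writer.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY, total REAL)")
writer.commit()

# "Read pool" connection: autocommit on, no transaction bookkeeping.
reader = sqlite3.connect(db, isolation_level=None)

writer.execute("INSERT INTO invoice (total) VALUES (99.50)")
writer.commit()  # until this commit, the reader cannot see the row

rows = reader.execute("SELECT id, total FROM invoice").fetchall()
print(rows)  # [(1, 99.5)]
```

The point of the split is that read-only traffic skips transaction overhead entirely, while writes keep explicit commit/rollback control.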

So, then, my question - in your expert opinion - what is the BEST
driver for SQL Server 2000 given the following needs:

Things I need it to have: solid JDBC 1.0 support, solid transaction
handling, decent concurrent load handling, a stable implementation of
resource handling (i.e. auto-closing result sets when parent statements
close, closing result sets and statements when connections close) and
support for multiple open statements/result sets per connection.

Things I do not have a direct need for:
Connection pooling (I keep my own pool of open connections), DataSource
support, distributed transactions or 2-phase commit support, RowSet
variants, JDBC 3.0 auto-key generation, J2EE compliance. We do make use
of updatable result sets, but I don't care if they aren't supported since
I can still use the MS driver for those, as they are mainly used in batch
jobs.

Things that are nice:
It would also be good to have a driver that can handle prepared statement
caching at the driver level (vs. the connection level), although this
isn't a necessity; if it is available, it would need to have a
configurable caching strategy (or a way to be turned off :)

I very much appreciate your help in advance
Thanks
Gary Bogachkov
System Architect
Stericycle Direct Return

Gary,

I know this is exactly what you were _not_ looking for, but I have some
marketing garbage coming your way. ;o)

First of all, a disclaimer: I'm a developer of the jTDS JDBC driver, so
at least in theory I can't be totally impartial.

Although jTDS is a JDBC 3.0 implementation with all the implied bells
and whistles, its JDBC 1.0 functionality is quite solid. If you check
the jTDS Help forum on SourceForge you'll see that most of the requests
we've had since the release of 0.9.1 were caused either by the user's
lack of knowledge or by bugs in earlier versions. I apologize if that
sounds like sales speak. jTDS actually has all the features you require,
plus updateable ResultSets (which are paged, and thus much faster than
with the MS driver).

To the benchmarks now: we have a very old benchmark result on our site
( http://jtds.sourceforge.net ), comparing jTDS to some of the other
JDBC drivers for SQL Server. Some companies such as JNetDirect and
DataDirect do not allow benchmark results of their drivers to be made
public, so they are not in that benchmark. If you want, I can provide
you with some more recent results with pretty much all of the available
drivers via email or even better, with some instructions on how to run
the benchmarks yourself.

What I noticed from the benchmarks: jTDS and the DataDirect drivers
were the only drivers that passed all tests (although they were
supposed to be performance tests, the other drivers failed some tests).
The DataDirect drivers are the direct ancestor of the MS driver and
still have the same selectMode=cursor/direct implementation, which is
really counterproductive. In cursor mode (the only one that supports
transactions, as you probably know) _all_ selects create server-side
cursors, using up a lot of resources on the server. The MS driver, as
well as some of the other DataDirect-derived drivers (there are a lot
of them) are only able to fetch one row per request, and that means a
huge number of roundtrips only to retrieve a ResultSet; the DataDirect
driver seems to be able to fetch rows in pages, however, greatly
improving performance; it's still slower than a direct select, which
jTDS uses, but it's ok. The JNetDirect and i-net Software drivers
(which would be the ones I would recommend from the whole bunch) are
close to jTDS in performance but they lack a number of functionalities
(that might not affect you, however).

The bottom line is this: the most stable driver seems to be the
DataDirect one; however it's quite slow, in the same range as the
MS driver; their stability comes from the intensive testing they do and
the fact that they are probably the most widely used of all drivers.
The JNetDirect and i-net Software drivers are both fast and probably
have all the features you need, but they cost a lot of bucks ($1700 for
the JNetDirect driver seems a lot for 200kb of code). According to my
(notice, "my") testing jTDS is the fastest of the bunch and seems to be
more stable (although not on par with the DataDirect driver).

jTDS is also the recommended choice for SQL Server by a number of open
source projects and companies: Hibernate, JBoss, Atlassian Confluence,
Db-Visualizer are only a few of these. Also, for the moment we're not
making any money from jTDS, so except for our own pride there's nothing
to push us to make statements we can't back up with facts.

I apologize if this is not what you were looking for, but I honestly
hope that's not the case.

Regards,
Alin,
The jTDS Project.

best install scenario

In the current MSDN Universal disk set, there is a disk for Visual Studio
2005 (beta2) and a disk for SQL Server 2005. If you install VS2005 you get
the April CTP of SQL Server, but without SQL Server Management Studio. If you
install SQL Server 2005, you get VS2005 but without the normal code
development templates. What's the recommended install sequence to get both?
And after several install/uninstall sequences I seem to have lost start menu
links to VS.Net 2003. Can I have VS.Net 2003 and VS.Net 2005 on the same
machine?
tks
Ron

Hi Ron,
Since SQL Server 2005 has not been publicly released yet, we will redirect
all SQL Server 2005 posts to the newsgroup below:
http://communities.microsoft.com/newsgroups/default.asp?icp=sqlserver2005&slcid=us
Thanks so much for your understanding.
Sincerely yours,
Michael Cheng
Microsoft Online Partner Support
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.


Best indexing option?

Hi,
I have to copy some data from one table to another. It's billing
information from an invoice to a Customer table.
The requirement is that if ANY billing information for a customer
changes, a new Customer record must be created. Currently I have this
query to do that.
SELECT ptc.CustomerId, dc.DocumentContactId
FROM dts.Customer AS ptc
INNER JOIN dbo.vDocumentContactDetail AS dc ON
ptc.CustomerName = dc.CompanyName
AND ptc.Line1 = dc.Line1
AND ptc.Line2 = dc.Line2
AND ptc.City = dc.City
AND ptc.State = dc.StateName
AND ptc.Country = dc.CountryName
AND ptc.Email = dc.Email
AND ptc.PhoneNumber = dc.PhoneNumber
AND ptc.FaxNumber = dc.FaxNumber
AND dc.FirstName + ' ' + dc.LastName = ptc.ContactName
This query is used by another view to get the customer id (which is
generated later and stored in the Customer table).
I noticed in the execution plan that there is a hash join which includes
CompanyName, dc.Email, and I believe the last part of the join, the
ContactName, although I'm not sure because the execution plan shows
[Exp1004] = [Exp1005].
Any other suggestions to make this more performant? Or am I limited?

I don't quite understand the requirement. From looking at the SELECT
it appears that it looks for a perfect match on everything EXCEPT the
keys, and returns the two keys. I can't say that I've ever seen
anything like that before except when merging two unrelated data
sources or resolving data redundancy problems.
Making a wild guess, I am wondering if the main thing done with
ptc.CustomerId and dc.DocumentContactId when they are returned is
compare them to see if they are the same so that some action (new
Customer record) can be taken if they are not. If so the entire
process appears to me to be inside out.
What I usually see, and write, when I want to check for changes:
SELECT ptc.CustomerId, dc.DocumentContactId
FROM dts.Customer AS ptc
JOIN dbo.vDocumentContactDetail AS dc
ON ptc.CustomerId = dc.DocumentContactId
WHERE ptc.CustomerName <> dc.CompanyName
OR ptc.Line1 <> dc.Line1
OR ptc.Line2 <> dc.Line2
OR ptc.City <> dc.City
OR ptc.State <> dc.StateName
OR ptc.Country <> dc.CountryName
OR ptc.Email <> dc.Email
OR ptc.PhoneNumber <> dc.PhoneNumber
OR ptc.FaxNumber <> dc.FaxNumber
OR dc.FirstName + ' ' + dc.LastName <> ptc.ContactName
That returns the keys that have differences. I don't know if that is
what you need, but maybe it is.
Roy Harvey
Beacon Falls, CT
On Tue, 13 Nov 2007 06:35:23 -0800, Andy <andyj@.med-associates.com>
wrote:

>Hi,
>I have to copy some data from one table to another. Its billing
>information from an invoice to a Customer table.
>The requirement is that if ANY billing information for a customer
>changes, a new Customer record must be created. Currently I have this
>query to do that.
>SELECT ptc.CustomerId, dc.DocumentContactId
>FROM dts.Customer AS ptc
> INNER JOIN dbo.vDocumentContactDetail AS dc ON
> ptc.CustomerName = dc.CompanyName
> AND ptc.Line1 = dc.Line1
> AND ptc.Line2 = dc.Line2
> AND ptc.City = dc.City
> AND ptc.State = dc.StateName
> AND ptc.Country = dc.CountryName
> AND ptc.Email = dc.Email
> AND ptc.PhoneNumber = dc.PhoneNumber
> AND ptc.FaxNumber = dc.FaxNumber
> AND dc.FirstName + ' ' + dc.LastName = ptc.ContactName
>This query is used by another view to get the customer id (which is
>generated later and stored in the Customer table.
>I noticed in the execution plan that there is a hash which is includes
>CompanyName, dc.Email, and I believe the last part of the join, the
>ContactName, although I'm not sure because the Execution plan
>includings [Exp1004] = [Exp1005].
>Any other suggestions to make this more performant? Or am I limited?

Andy,
If there is no index on any of the columns in the ON-clause, then
hashing is to be expected.
I suggest you experiment a bit with (compound) indexes. Start with the
column that is most selective, and add (other selective) columns until
the total (compound) index has enough selectivity to be fast. I wouldn't
be surprised if it is enough to just index the Email column, because it is
probably highly selective.
Just add a lot of indexes on both tables, check the query plan to see
which are used, and drop the unused indexes.
HTH,
Gert-Jan
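Gert-Jan's suggestion (a compound index led by the most selective column, then checking the plan) can be exercised against any engine that exposes its query plan. A hypothetical sketch with Python's stdlib sqlite3; the index name and data here are invented, and SQL Server's showplan output looks different:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (CustomerName TEXT, Email TEXT, City TEXT)")
conn.execute("INSERT INTO Customer VALUES ('Acme', 'a@acme.test', 'Boston')")

# Compound index with the presumably most selective column (Email) first.
conn.execute("CREATE INDEX ix_cust ON Customer (Email, CustomerName)")

plan = str(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM Customer WHERE Email = 'a@acme.test'"
).fetchall())
print(plan)  # the detail text mentions ix_cust: the planner picked the index
```

Checking the plan after each index change, as Gert-Jan suggests, is the only reliable way to see which indexes the optimizer actually uses.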
Andy wrote:
> Hi,
> I have to copy some data from one table to another. Its billing
> information from an invoice to a Customer table.
> The requirement is that if ANY billing information for a customer
> changes, a new Customer record must be created. Currently I have this
> query to do that.
> SELECT ptc.CustomerId, dc.DocumentContactId
> FROM dts.Customer AS ptc
> INNER JOIN dbo.vDocumentContactDetail AS dc ON
> ptc.CustomerName = dc.CompanyName
> AND ptc.Line1 = dc.Line1
> AND ptc.Line2 = dc.Line2
> AND ptc.City = dc.City
> AND ptc.State = dc.StateName
> AND ptc.Country = dc.CountryName
> AND ptc.Email = dc.Email
> AND ptc.PhoneNumber = dc.PhoneNumber
> AND ptc.FaxNumber = dc.FaxNumber
> AND dc.FirstName + ' ' + dc.LastName = ptc.ContactName
> This query is used by another view to get the customer id (which is
> generated later and stored in the Customer table.
> I noticed in the execution plan that there is a hash which is includes
> CompanyName, dc.Email, and I believe the last part of the join, the
> ContactName, although I'm not sure because the Execution plan
> includings [Exp1004] = [Exp1005].
> Any other suggestions to make this more performant? Or am I limited?


Best indexes to use

Hello,
I am currently working on some database tables that are suffering from
performance issues and would like some advice on which indexes are best
to use.
There exists a composite key on three fields out of 7 in the table and
these three fields are the ones normally used when searching the table
as well.
1) As most queries return only single answers, would it be better to
add three separate indexes or a composite index?
2) Also, from my research, most people indicate using composite index
would be better - is this the case?
3) If I use a composite index, would it be used when searching using
only one or two of the composite fields?
Thanks in advance,
jumblesale wrote:
> Hello,
> I am currently working on some database tables that are suffering from
> performance issues and would like some advice on which indexes are best
> to use.
> There exists a composite key on three fields out of 7 in the table and
> these three fields are the ones normally used when searching the table
> as well.
> 1) As most queries return only single answers, would it be better to
> add three seperate indexes or a composite index?
> 2) Also, from my research, most people indicate using composite index
> would be better - is this the case?
> 3) If I use a composite index, would it be used when searching using
> only one or two of the composite fields?
> Thanks in advance,
> j
>
You can look at the execution plan to see what indexes are being used by
your query. I'll try to give an overly simplified explanation:
Suppose I have a table with 7 columns, Col1 thru Col7.
I have an index named Index1 using Col1, Col2, Col3 as a composite key.
Running the following queries (data volumes/optimizer whims may alter
these behaviors):
-- This will do an index seek using Index1
SELECT Col1, Col2
FROM MyTable
WHERE Col1 = 'x'
-- This will do an index SCAN using Index1
SELECT Col1, Col2
FROM MyTable
WHERE Col2 = 'x'
-- This will do an index seek using Index1, with a bookmark lookup to
get Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col1 = 'x'
-- This will do an index SCAN using Index1, with a bookmark lookup to
get Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col2 = 'x'
Does that make sense? Now let's add another index, Index2, to the
table, using Col2, Col4 as a key:
-- This will now do an index seek using Index2
SELECT Col1, Col2
FROM MyTable
WHERE Col2 = 'x'
-- This will do an index seek using Index2, with a bookmark lookup to
get the value of Col1, not Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col2 = 'x'
These are ridiculously simple examples, and when run against real data
volumes, the optimizer may choose a different course of action,
depending on statistics, distribution of values, etc...
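The seek-vs-scan distinction in the examples above isn't SQL Server specific; it can be reproduced with any engine that exposes its plan. A small sketch using Python's stdlib sqlite3, reusing the example's table and index names (SQLite reports "SEARCH" where SQL Server says index seek, and "SCAN" for a scan):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Col1 TEXT, Col2 TEXT, Col3 TEXT, Col4 TEXT)")
conn.execute("CREATE INDEX Index1 ON MyTable (Col1, Col2, Col3)")

def explain(sql):
    # Ask the planner how it would execute the query.
    return str(conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall())

# Leading index column in the WHERE clause: a seek ("SEARCH" in SQLite).
seek_plan = explain("SELECT Col1, Col2 FROM MyTable WHERE Col1 = 'x'")

# Only a non-leading column filtered: the index can't seek, so it scans.
scan_plan = explain("SELECT Col1, Col2 FROM MyTable WHERE Col2 = 'x'")

print(seek_plan)
print(scan_plan)
```

The composite index only supports seeks when the filter includes its leading column, which is the behavior the examples above describe.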
Tracy McKibben
MCDBA
http://www.realsqlguy.com

Cheers for your reply - that's great with regard to the composite index
issue I'm having. But in these kinds of scenarios, would I be right in
thinking that a non-clustered index covering the three columns would be
of more use than a clustered index, because I am using queries that
bring back small numbers of results (usually one)?
j
> You can look at the execution plan to see what indexes are being used by
> your query. I'll try to give an overly simplified explanation:
> Suppose I have a table with 7 columns, Col1 thru Col7.
> I have an index named Index1 using Col1, Col2, Col3 as a composite key.
> Running the following queries (data volumes/optimizer whims may alter
> these behaviors):
> -- This will do an index seek using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index1, with a bookmark lookup to
> get Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1, with a bookmark lookup to
> get Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> Does that make sense? Now let's add another index, Index2, to the
> table, using Col2, Col4 as a key:
> -- This will now do an index seek using Index2
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index2, with a bookmark lookup to
> get the value of Col1, not Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> These are ridiculously simple examples, and when run against real data
> volumes, the optimizer may choose a different course of action,
> depending on statistics, distribution of values, etc...
>
> --
> Tracy McKibben
> MCDBA
http://www.realsqlguy.com

jumblesale wrote:
> Cheers for your reply - that's great in regards to the composite index
> issue i'm having, but in these kinds of scenarios, would I be right in
> thinking that a non-clustered index covering the three columns would be
> of more use than a clustered index because I am using queries that
> bring back small numbers (usually one) result?
>
I'm going to have to use the canned response of "it depends" on this
one. *GENERALLY* clustered indexes perform best on range scans, BUT,
they can also be used by non-clustered indexes to avoid doing bookmark
lookups. Your best bet is to experiment a little, compare the execution
plan of your query when different indexes are available, see what
performs best given your data.
Tracy McKibben
MCDBA
http://www.realsqlguy.com

Tracy's given some nice samples here. I'll just chime in with a little extra
which might help you understand.
Ideally, you want the columns that are being filtered in the query + those
being returned in the select list in the index. If you have many different
queries that run against the table, you might need a few different indexes
to accommodate their various requirements. Don't be afraid of adding more
than one column to an index & also don't be afraid of adding multiple
indexes to a table.
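Greg's point (put the filtered columns plus the selected columns into the index) is what makes an index "covering". A minimal sketch with Python's stdlib sqlite3, reusing the invented names from the earlier examples; SQL Server shows the same effect as an index seek with no bookmark lookup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Col1 TEXT, Col2 TEXT, Col3 TEXT, Col4 TEXT)")

# Key the index on the filtered column plus the selected column.
conn.execute("CREATE INDEX Index2 ON MyTable (Col2, Col4)")

plan = str(conn.execute(
    "EXPLAIN QUERY PLAN SELECT Col2, Col4 FROM MyTable WHERE Col2 = 'x'"
).fetchall())
print(plan)  # "COVERING INDEX": the query is answered from the index alone
```

Because every column the query touches lives in the index, the engine never has to visit the base table at all.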
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:4506C241.8090806@.realsqlguy.com...
> jumblesale wrote:
>> Hello,
>> I am currently working on some database tables that are suffering from
>> performance issues and would like some advice on which indexes are best
>> to use.
>> There exists a composite key on three fields out of 7 in the table and
>> these three fields are the ones normally used when searching the table
>> as well.
>> 1) As most queries return only single answers, would it be better to
>> add three seperate indexes or a composite index?
>> 2) Also, from my research, most people indicate using composite index
>> would be better - is this the case?
>> 3) If I use a composite index, would it be used when searching using
>> only one or two of the composite fields?
>> Thanks in advance,
>> j
> You can look at the execution plan to see what indexes are being used by
> your query. I'll try to give an overly simplified explanation:
> Suppose I have a table with 7 columns, Col1 thru Col7.
> I have an index named Index1 using Col1, Col2, Col3 as a composite key.
> Running the following queries (data volumes/optimizer whims may alter
> these behaviors):
> -- This will do an index seek using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index1, with a bookmark lookup to get
> Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1, with a bookmark lookup to get
> Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> Does that make sense? Now let's add another index, Index2, to the table,
> using Col2, Col4 as a key:
> -- This will now do an index seek using Index2
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index2, with a bookmark lookup to get
> the value of Col1, not Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> These are ridiculously simple examples, and when run against real data
> volumes, the optimizer may choose a different course of action, depending
> on statistics, distribution of values, etc...
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||ha! sorry for backing you into a corner there - good advice though,
i'll have to get working on the indexes and check the execution plans -
just wanted a bit of advise before I start playing about on the server.
Thanks for your ideas,
j
Tracy McKibben wrote:
> I'm going to have to used the canned response of "it depends" on this
> one. *GENERALLY* clustered indexes perform best on range scans, BUT,
> they can also be used by non-clustered indexes to avoid doing bookmark
> lookups. Your best bet is to experiment a little, compare the execution
> plan of your query when different indexes are available, see what
> performs best given your data.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||Hi Tracy
Re >*GENERALLY* clustered indexes perform best on range scans<
I'd love to hear your thoughts on this blog I did yesterday.
http://blogs.sqlserver.org.au/blogs/greg_linwood/archive/2006/09/11/365.aspx
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:4506C596.30004@.realsqlguy.com...
> jumblesale wrote:
>> Cheers for your reply - that's great in regards to the composite index
>> issue i'm having, but in these kinds of scenarios, would I be right in
>> thinking that a non-clustered index covering the three columns would be
>> of more use than a clustered index because I am using queries that
>> bring back small numbers (usually one) result?
> I'm going to have to used the canned response of "it depends" on this one.
> *GENERALLY* clustered indexes perform best on range scans, BUT, they can
> also be used by non-clustered indexes to avoid doing bookmark lookups.
> Your best bet is to experiment a little, compare the execution plan of
> your query when different indexes are available, see what performs best
> given your data.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||Greg Linwood wrote:
> Hi Tracy
> Re >*GENERALLY* clustered indexes perform best on range scans<
> I'd love to hear your thoughts on this blog I did yesterday.
> http://blogs.sqlserver.org.au/blogs/greg_linwood/archive/2006/09/11/365.aspx
>
I have to admit, I've never actually compared clustered vs.
non-clustered indexes in the detail that you have here. I'm guilty of
simply repeating what I've picked up from other sources. Your analysis
makes perfect sense to me, but I'm curious... Your example assumes
we're returning a subset of columns from every row in the table.
Suppose our table contains a million rows, clustered on a "SessionID"
column. Session ID's increment throughout the day. A typical session
contains 100-150 rows, and I want to return every column from the table
for the five sessions that occurred yesterday. Without having hard
numbers to back me up, it seems like it the clustered index wins in this
case. Or would you not consider this to be a "range" scan?
I think it's going to depend on how wide your query is and what sort of
data you're pulling back - can it be covered with a non-clustered index?
Either way, it's a well written article and it certainly has me thinking...
--
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Hi Greg,
Just a quick question - because I don't have a unique primary key and
require the use of a composite key on the table, would it potentially
cause a performance hit if I use composite NCIX only and have no CIX on
the table?
Cheers for your posts btw - very interesting blog article too - it
confirmed some tests I thought i'd done incorrectly in a sql refresher
course I attended last year!
j
Greg Linwood wrote:
> Hi Tracy
> Re >*GENERALLY* clustered indexes perform best on range scans<
> I'd love to hear your thoughts on this blog I did yesterday.
> http://blogs.sqlserver.org.au/blogs/greg_linwood/archive/2006/09/11/365.aspx
> Regards,
> Greg Linwood
> SQL Server MVP
> http://blogs.sqlserver.org.au/blogs/greg_linwood
>|||jumblesale wrote:
> ha! sorry for backing you into a corner there - good advice though,
> i'll have to get working on the indexes and check the execution plans -
> just wanted a bit of advise before I start playing about on the server.
>
I spend most of my time sitting in the corner... :-)
There's just no way to give a definitive answer without having intimate
knowledge of the data involved, and being able to work with it
first-hand. Good luck!
Tracy McKibben
MCDBA
http://www.realsqlguy.com

Best indexes to use

Hello,
I am currently working on some database tables that are suffering from
performance issues and would like some advice on which indexes are best
to use.
There exists a composite key on three fields out of 7 in the table and
these three fields are the ones normally used when searching the table
as well.
1) As most queries return only single answers, would it be better to
add three separate indexes or a composite index?
2) Also, from my research, most people indicate using a composite index
would be better - is this the case?
3) If I use a composite index, would it be used when searching using
only one or two of the composite fields?
Thanks in advance,
j
jumblesale wrote:
> Hello,
> I am currently working on some database tables that are suffering from
> performance issues and would like some advice on which indexes are best
> to use.
> There exists a composite key on three fields out of 7 in the table and
> these three fields are the ones normally used when searching the table
> as well.
> 1) As most queries return only single answers, would it be better to
> add three separate indexes or a composite index?
> 2) Also, from my research, most people indicate using a composite index
> would be better - is this the case?
> 3) If I use a composite index, would it be used when searching using
> only one or two of the composite fields?
> Thanks in advance,
> j
>
You can look at the execution plan to see what indexes are being used by
your query. I'll try to give an overly simplified explanation:
Suppose I have a table with 7 columns, Col1 thru Col7.
I have an index named Index1 using Col1, Col2, Col3 as a composite key.
Running the following queries (data volumes/optimizer whims may alter
these behaviors):
-- This will do an index seek using Index1
SELECT Col1, Col2
FROM MyTable
WHERE Col1 = 'x'
-- This will do an index SCAN using Index1
SELECT Col1, Col2
FROM MyTable
WHERE Col2 = 'x'
-- This will do an index seek using Index1, with a bookmark lookup to
get Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col1 = 'x'
-- This will do an index SCAN using Index1, with a bookmark lookup to
get Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col2 = 'x'
Does that make sense? Now let's add another index, Index2, to the
table, using Col2, Col4 as a key:
-- This will now do an index seek using Index2
SELECT Col1, Col2
FROM MyTable
WHERE Col2 = 'x'
-- This will do an index seek using Index2, with a bookmark lookup to
get the value of Col1, not Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col2 = 'x'
These are ridiculously simple examples, and when run against real data
volumes, the optimizer may choose a different course of action,
depending on statistics, distribution of values, etc...
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Cheers for your reply - that's great in regard to the composite index
issue I'm having, but in these kinds of scenarios, would I be right in
thinking that a non-clustered index covering the three columns would be
of more use than a clustered index, because I am using queries that
bring back small numbers of results (usually one)?
j

> You can look at the execution plan to see what indexes are being used by
> your query. I'll try to give an overly simplified explanation:
> Suppose I have a table with 7 columns, Col1 thru Col7.
> I have an index named Index1 using Col1, Col2, Col3 as a composite key.
> Running the following queries (data volumes/optimizer whims may alter
> these behaviors):
> -- This will do an index seek using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index1, with a bookmark lookup to
> get Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1, with a bookmark lookup to
> get Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> Does that make sense? Now let's add another index, Index2, to the
> table, using Col2, Col4 as a key:
> -- This will now do an index seek using Index2
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index2, with a bookmark lookup to
> get the value of Col1, not Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> These are ridiculously simple examples, and when run against real data
> volumes, the optimizer may choose a different course of action,
> depending on statistics, distribution of values, etc...
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||jumblesale wrote:
> Cheers for your reply - that's great in regard to the composite index
> issue I'm having, but in these kinds of scenarios, would I be right in
> thinking that a non-clustered index covering the three columns would be
> of more use than a clustered index, because I am using queries that
> bring back small numbers of results (usually one)?
>
I'm going to have to use the canned response of "it depends" on this
one. *GENERALLY* clustered indexes perform best on range scans, BUT,
they can also be used by non-clustered indexes to avoid doing bookmark
lookups. Your best bet is to experiment a little, compare the execution
plan of your query when different indexes are available, see what
performs best given your data.
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Tracy's given some nice samples here. I'll just chime in with a little extra
which might help you understand.
Ideally, you want the columns that are being filtered in the query + those
being returned in the select list in the index. If you have many different
queries that run against the table, you might need a few different indexes
to accommodate their various requirements. Don't be afraid of adding more
than one column to an index & also don't be afraid of adding multiple
indexes to a table.
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:4506C241.8090806@.realsqlguy.com...
> jumblesale wrote:
> You can look at the execution plan to see what indexes are being used by
> your query. I'll try to give an overly simplified explanation:
> Suppose I have a table with 7 columns, Col1 thru Col7.
> I have an index named Index1 using Col1, Col2, Col3 as a composite key.
> Running the following queries (data volumes/optimizer whims may alter
> these behaviors):
> -- This will do an index seek using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index1, with a bookmark lookup to get
> Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1, with a bookmark lookup to get
> Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> Does that make sense? Now let's add another index, Index2, to the table,
> using Col2, Col4 as a key:
> -- This will now do an index seek using Index2
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index2, with a bookmark lookup to get
> the value of Col1, not Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> These are ridiculously simple examples, and when run against real data
> volumes, the optimizer may choose a different course of action, depending
> on statistics, distribution of values, etc...
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||Hi Tracy
Re >*GENERALLY* clustered indexes perform best on range scans<
I'd love to hear your thoughts on this blog I did yesterday.
http://blogs.sqlserver.org.au/blogs.../09/11/365.aspx
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:4506C596.30004@.realsqlguy.com...
> jumblesale wrote:
> I'm going to have to use the canned response of "it depends" on this one.
> *GENERALLY* clustered indexes perform best on range scans, BUT, they can
> also be used by non-clustered indexes to avoid doing bookmark lookups.
> Your best bet is to experiment a little, compare the execution plan of
> your query when different indexes are available, see what performs best
> given your data.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||ha! sorry for backing you into a corner there - good advice though,
i'll have to get working on the indexes and check the execution plans -
just wanted a bit of advice before I start playing about on the server.
Thanks for your ideas,
j
Tracy McKibben wrote:

> I'm going to have to use the canned response of "it depends" on this
> one. *GENERALLY* clustered indexes perform best on range scans, BUT,
> they can also be used by non-clustered indexes to avoid doing bookmark
> lookups. Your best bet is to experiment a little, compare the execution
> plan of your query when different indexes are available, see what
> performs best given your data.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||Greg Linwood wrote:
> Hi Tracy
> Re >*GENERALLY* clustered indexes perform best on range scans<
> I'd love to hear your thoughts on this blog I did yesterday.
> http://blogs.sqlserver.org.au/blogs/greg_linwood/archive/2006/09/11/365.aspx
>
I have to admit, I've never actually compared clustered vs.
non-clustered indexes in the detail that you have here. I'm guilty of
simply repeating what I've picked up from other sources. Your analysis
makes perfect sense to me, but I'm curious... Your example assumes
we're returning a subset of columns from every row in the table.
Suppose our table contains a million rows, clustered on a "SessionID"
column. Session IDs increment throughout the day. A typical session
contains 100-150 rows, and I want to return every column from the table
for the five sessions that occurred yesterday. Without having hard
numbers to back me up, it seems like the clustered index wins in this
case. Or would you not consider this to be a "range" scan?
I think it's going to depend on how wide your query is and what sort of
data you're pulling back - can it be covered with a non-clustered index?
Either way, it's a well written article and it certainly has me thinking...
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Hi Greg,
Just a quick question - because I don't have a unique primary key and
require the use of a composite key on the table, would it potentially
cause a performance hit if I use composite NCIX only and have no CIX on
the table?
Cheers for your posts btw - very interesting blog article too - it
confirmed some tests I thought I'd done incorrectly in a SQL refresher
course I attended last year!
j
Greg Linwood wrote:

> Hi Tracy
> Re >*GENERALLY* clustered indexes perform best on range scans<
> I'd love to hear your thoughts on this blog I did yesterday.
> http://blogs.sqlserver.org.au/blogs...gs/greg_linwood
>|||jumblesale wrote:
> ha! sorry for backing you into a corner there - good advice though,
> i'll have to get working on the indexes and check the execution plans -
> just wanted a bit of advise before I start playing about on the server.
>
I spend most of my time sitting in the corner... :-)
There's just no way to give a definitive answer without having intimate
knowledge of the data involved, and being able to work with it
first-hand. Good luck!
Tracy McKibben
MCDBA
http://www.realsqlguy.com
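To make the advice in this thread concrete, here is a rough T-SQL sketch. The table, column, and index names are hypothetical, and as noted above, the plan you actually get depends on your data volumes and statistics:

```sql
-- Hypothetical table with a three-column composite key, as in the question.
CREATE TABLE MyTable (
    Col1 INT NOT NULL,
    Col2 INT NOT NULL,
    Col3 INT NOT NULL,
    Col4 VARCHAR(50) NULL,
    Col5 VARCHAR(50) NULL,
    Col6 VARCHAR(50) NULL,
    Col7 VARCHAR(50) NULL,
    CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (Col1, Col2, Col3)
)
GO

-- A single composite non-clustered index can serve searches that filter on
-- a left-based prefix of its key: (Col1), (Col1, Col2), or (Col1, Col2, Col3).
-- It also covers any query whose select list stays within those columns,
-- avoiding bookmark lookups.
CREATE NONCLUSTERED INDEX IX_MyTable_Composite
    ON MyTable (Col1, Col2, Col3)
GO

-- Compare execution plans with and without the index, as suggested above.
-- SET SHOWPLAN_TEXT must be alone in its batch.
SET SHOWPLAN_TEXT ON
GO
SELECT Col1, Col2, Col3
FROM MyTable
WHERE Col1 = 1 AND Col2 = 2   -- left-based prefix: eligible for a seek
GO
SET SHOWPLAN_TEXT OFF
GO
```

Running the same `SHOWPLAN` check with a `WHERE Col2 = ...` predicate (no leading key column) should show the scan behavior Tracy describes.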

Best image format to use?

I want to insert our company logo on some of the SRS reports. Could someone
recommend the best format to use? I want the image to look clean when
printed as well.
On May 14, 5:11 pm, Chris Patten
<ChrisPat...@.discussions.microsoft.com> wrote:
> I want to insert our company logo on some of the SRS reports. Could someone
> recommend the best format to use? I want the image to look clean when
> printed as well.
Usually JPEG/JPG and GIF are the best to use due to smaller size (in
comparison to MPEG, etc).
Regards,
Enrique Martinez
Sr. Software Consultant|||I've had problems with GIFs, especially on PDF exporting. JPEG seems
to work passably.

Best image format for screen & print?

We are having real problems getting external report (from a URL) images of a
good quality consistently for both screen and PDF print.
Can anyone advise as to the best image format, sizing option, resolution,
etc. for SQL 2K reporting services SP1?
Thanks,
Chris
Chris,
I find it best to design to the printed page size.
I'm using A4 landscape generally, I guess it would apply equally well
to US Letter.
I set the page width to 29.7cm, page height to 21cm - you need to make
sure this is set right so that the PDF document is paginated correctly.
I use margins of 0.5cm all round (NB:Some printers may not be able to
handle margins this small all the way round).
The report body is then set to 28.7cm (page width - left & right
margin). If you ever make this wider, i.e. by increasing the width of a
table beyond this, it will start splitting onto 2 pages in PDF. The
report body MUST fit inside the printable width, this is the common
cause of blank pages printing out.
I have found that these reports translate well for HTML display on a
1024x768 res. screen. The only caveat is the page header scrolling off
the screen. If you limit the number of rows per page to suit the screen
display, you'll end up with loads of empty space on the printed pages!
Hope that helps,
Chris.
Chris Lewis wrote:
> We are having real problems getting external report (from a URL)
> images of a good quality consistently for both screen and PDF print.
> Can anyone advise as to the best image format, sizing option,
> resolution, etc. for SQL 2K reporting services SP1?
> Thanks,
> Chris|||Thanks Chris.
Can you advise on the types of images that you embed or reference in your
reports - jpegs, gifs, etc. This is what we are having real difficulty
with - our images either look good on screen (HTML) *or* in PDF - we don't
seem to be able to get both to look good with the same image?
Thanks,
Chris
"Chris McGuigan" <chris.mcguigan@.zycko.com> wrote in message
news:uCbIRZoEFHA.3992@.tk2msftngp13.phx.gbl...
> Chris,
> I find it best to design to the printed page size.
> I'm using A4 landscape generally, I guess it would apply equally well
> to US Letter.
> I set the page width to 29.7cm, page height to 21cm - you need to make
> sure this is set right so that the PDF document is paginated correctly.
> I use margins of 0.5cm all round (NB:Some printers may not be able to
> handle margins this small all the way round).
> The report body is then set to 28.7cm (page width - left & right
> margin). If you ever make this wider, i.e. by increasing the width of a
> table beyond this, it will start splitting onto 2 pages in PDF. The
> report body MUST fit inside the printable width, this is the common
> cause of blank pages printing out.
> I have found that these reports translate well for HTML display on a
> 1024x768 res. screen. The only caveat is the page header scrolling off
> the screen. If you limit the number of rows per page to suit the screen
> display, you'll end up with loads of empty space on the printed pages!
> Hope that helps,
> Chris.
> Chris Lewis wrote:
> > We are having real problems getting external report (from a URL)
> > images of a good quality consistently for both screen and PDF print.
> >
> > Can anyone advise as to the best image format, sizing option,
> > resolution, etc. for SQL 2K reporting services SP1?
> >
> > Thanks,
> >
> > Chris
>|||I find GIFs seem to scale better, but if your images are predominately
photographic JPEG will probably perform better. I design them so that
they are shown at their native size on the screen which means they are
normally scaled up a bit on PDF. The trick is to avoid images being
scaled up heavily from their original size. This might mean larger
image files, but you don't get something for nothing!
I'm generally using images as backgrounds, but if you're using the
image control then avoid setting the 'Sizing' property to 'Fit' or 'Fit
Proportional' use 'AutoSize' or 'Clip'. That way you avoid rescaling.
If that's not possible then always use 'Fit Proportional'.
Regards,
Chris.
Chris Lewis wrote:
> Thanks Chris.
> Can you advise on the types of images that you embed or reference in
> your reports - jpegs, gifs, etc. This is what we are having real
> difficulty with - our images either look good on screen (HTML) or in
> PDF - we don't seem to be able to get both to look good with the same
> image?
> Thanks,
> Chris
> "Chris McGuigan" <chris.mcguigan@.zycko.com> wrote in message
> news:uCbIRZoEFHA.3992@.tk2msftngp13.phx.gbl...
> > Chris,
> > I find it best to design to the printed page size.
> > I'm using A4 landscape generally, I guess it would apply equally
> > well to US Letter.
> > I set the page width to 29.7cm, page height to 21cm - you need to
> > make sure this is set right so that the PDF document is paginated
> > correctly.
> >
> > I use margins of 0.5cm all round (NB:Some printers may not be able
> > to handle margins this small all the way round).
> > The report body is then set to 28.7cm (page width - left & right
> > margin). If you ever make this wider, i.e. by increasing the width
> > of a table beyond this, it will start splitting onto 2 pages in
> > PDF. The report body MUST fit inside the printable width, this is
> > the common cause of blank pages printing out.
> >
> > I have found that these reports translate well for HTML display on a
> > 1024x768 res. screen. The only caveat is the page header scrolling
> > off the screen. If you limit the number of rows per page to suit
> > the screen display, you'll end up with loads of empty space on the
> > printed pages!
> >
> > Hope that helps,
> >
> > Chris.
> >
> > Chris Lewis wrote:
> >
> > > We are having real problems getting external report (from a URL)
> > > images of a good quality consistently for both screen and PDF
> > > print.
> > >
> > > Can anyone advise as to the best image format, sizing option,
> > > resolution, etc. for SQL 2K reporting services SP1?
> > >
> > > Thanks,
> > >
> > > Chris
> >|||Thanks again Chris, I'll go away and try out your suggestions.
Chris
"Chris McGuigan" <chris.mcguigan@.zycko.com> wrote in message
news:#YOJZwoEFHA.1392@.tk2msftngp13.phx.gbl...
> I find GIFs seem to scale better, but if your images are predominately
> photographic JPEG will probably perform better. I design them so that
> they are shown at their native size on the screen which means they are
> normally scaled up a bit on PDF. The trick is to avoid images being
> scaled up heavily from their original size. This might mean larger
> image files, but you don't get something for nothing!
> I'm generally using images as backgrounds, but if you're using the
> image control then avoid setting the 'Sizing' property to 'Fit' or 'Fit
> Proportional' use 'AutoSize' or 'Clip'. That way you avoid rescaling.
> If that's not possible then always use 'Fit Proportional'.
> Regards,
> Chris.
> Chris Lewis wrote:
> > Thanks Chris.
> >
> > Can you advise on the types of images that you embed or reference in
> > your reports - jpegs, gifs, etc. This is what we are having real
> > difficulty with - our images either look good on screen (HTML) or in
> > PDF - we don't seem to be able to get both to look good with the same
> > image?
> >
> > Thanks,
> >
> > Chris
> >
> > "Chris McGuigan" <chris.mcguigan@.zycko.com> wrote in message
> > news:uCbIRZoEFHA.3992@.tk2msftngp13.phx.gbl...
> > > Chris,
> > > I find it best to design to the printed page size.
> > > I'm using A4 landscape generally, I guess it would apply equally
> > > well to US Letter.
> > > I set the page width to 29.7cm, page height to 21cm - you need to
> > > make sure this is set right so that the PDF document is paginated
> > > correctly.
> > >
> > > I use margins of 0.5cm all round (NB:Some printers may not be able
> > > to handle margins this small all the way round).
> > > The report body is then set to 28.7cm (page width - left & right
> > > margin). If you ever make this wider, i.e. by increasing the width
> > > of a table beyond this, it will start splitting onto 2 pages in
> > > PDF. The report body MUST fit inside the printable width, this is
> > > the common cause of blank pages printing out.
> > >
> > > I have found that these reports translate well for HTML display on a
> > > 1024x768 res. screen. The only caveat is the page header scrolling
> > > off the screen. If you limit the number of rows per page to suit
> > > the screen display, you'll end up with loads of empty space on the
> > > printed pages!
> > >
> > > Hope that helps,
> > >
> > > Chris.
> > >
> > > Chris Lewis wrote:
> > >
> > > > We are having real problems getting external report (from a URL)
> > > > images of a good quality consistently for both screen and PDF
> > > > print.
> > > >
> > > > Can anyone advise as to the best image format, sizing option,
> > > > resolution, etc. for SQL 2K reporting services SP1?
> > > >
> > > > Thanks,
> > > >
> > > > Chris
> > >
>

Best hw recommendation for a sql server - server.

At this time, I have a server running SQL Server 2000 SP3 on a Windows 2000
Server SP4 machine. This server is running slow, so I need to recommend a new
server, but I don't know how to estimate the best hardware that will help us.
Does somebody know a tool that can help me estimate the best hardware for our
system requirements?
Thanks a lot for your help.
> At this time, I have a server running SQL Server 2000 SP3 on a Windows 2000
> Server SP4 machine. This server is running slow, so I need to recommend a new
> server, but I don't know how to estimate the best hardware that will help us.
Why do you think the slowness is due to hardware problems?
The most monumental performance increases we have realized over the past
year were from:
(a) installing SQL Server 2000 SP4 (huge gain!)
(b) optimizing indexes, statistics and procedure code|||Have you already eliminated deadlocking, poor indexing, etc. before deciding
to purchase new ...? If not, throwing hardware at a problem will only be a
short-term solution.
Find out the budget and work back from there...
HTH. Ryan
"Maria Guzman" <MariaGuzman@.discussions.microsoft.com> wrote in message
news:EDFA0CF8-51A1-4CC3-AA14-9F36D255DF2D@.microsoft.com...
> At this time, I have a server running SQL Server 2000 SP3 on a Windows 2000
> Server SP4 machine. This server is running slow, so I need to recommend a new
> server, but I don't know how to estimate the best hardware that will help us.
> Does somebody know a tool that can help me estimate the best hardware for our
> system requirements?
> Thanks a lot for your help.|||Have you used Performance Monitor to confirm that the bottleneck is your
server? Replacing the hardware may seem like a quick fix compared to
analyzing and re-programming the application, but if you shell out the
$$,$$$ and the problem is still not solved, then you end up looking really
bad.
Performance Monitor:
http://www.sql-server-performance.c...&seqNum=28&rl=1
How to Perform a SQL Server Performance Audit
http://www.sql-server-performance.c...mance_audit.asp
Checklist: SQL Server Performance
http://msdn.microsoft.com/library/d...etcheck08.asp
There may still be an easy fix related to resolving deadlocks, logical or
disk defragmenting, or simply adding a crucially needed index.
http://support.microsoft.com/defaul...kb;en-us;832524
Microsoft SQL Server 2000 Index Defragmentation Best Practices
http://www.microsoft.com/technet/pr...n/ss2kidbp.mspx
How To: Optimize SQL Indexes
http://msdn.microsoft.com/library/d...etHowTo03.asp
"Maria Guzman" <MariaGuzman@.discussions.microsoft.com> wrote in message
news:EDFA0CF8-51A1-4CC3-AA14-9F36D255DF2D@.microsoft.com...
> At this time, I have a server running SQL Server 2000 SP3 on a Windows 2000
> Server SP4 machine. This server is running slow, so I need to recommend a new
> server, but I don't know how to estimate the best hardware that will help us.
> Does somebody know a tool that can help me estimate the best hardware for our
> system requirements?
> Thanks a lot for your help.|||Thanks a lot for your recommendation. I applied some of them. The other thing
that I need help with in estimating the best hardware for a SQL server is that here in the
office they want to change the server because they want to implement a
cluster. So I have to recommend new hardware requirements.
Do you know any formula that can help to estimate that? Or can you tell
me what issues I need to consider to analyze that?
Thanks a lot for your help.
"JT" wrote:

> Have you used Performance Monitor to confirm that the bottleneck is your
> server? Replacing the hardware may seem like a quick fix compared to
> analyzing and re-programming the application, but if you shell out the
> $$,$$$ and the problem is still not solved, then you end up looking really
> bad.
> Performance Monitor:
> http://www.sql-server-performance.c...ver.asp
> Monitoring - Performance Monitor
> http://www.informit.com/guides/cont...&seqNum=28&rl=1
> How to Perform a SQL Server Performance Audit
> http://www.sql-server-performance.c...mance_audit.asp
> Checklist: SQL Server Performance
> http://msdn.microsoft.com/library/d...enetcheck08.asp
> There may still be an easy fix related to resolving deadlocks, logical or
> disk defragmenting, or simply adding a crucially needed index.
> http://support.microsoft.com/defaul...kb;en-us;832524
> Microsoft SQL Server 2000 Index Defragmentation Best Practices
> http://www.microsoft.com/technet/pr...eNetHowTo03.asp
> "Maria Guzman" <MariaGuzman@.discussions.microsoft.com> wrote in message
> news:EDFA0CF8-51A1-4CC3-AA14-9F36D255DF2D@.microsoft.com...
>
>

Best GUID Storage

Because of the problem getting IDENTITY primary key values back when
inserting batches of rows, I would like to experiment with using
GUIDs. Within an application, I would like to assign the primary keys
to the rows and then pass them into the INSERT statements. Then I
wouldn't have to worry about using triggers or Identity scope to
determine the new primary keys.
My question is basically, what's the best datatype to store the GUIDs
in the column? From what I've read so far, it looks like
UniqueIdentifier or CHAR(40) are my options. Is there any drawback to
using UniqueIdentifier?|||If you are storing a GUID, then why not use the uniqueidentifier data type?
In SQL Server Books Online, read the page titled "Using uniqueidentifier
Data". This page discusses the advantages and disadvantages of this
datatype.
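
As a minimal sketch of the approach (the table, column, and values below are invented for illustration), uniqueidentifier accepts GUIDs generated either in the application or by NEWID():

```sql
-- Hypothetical table: the application generates the GUID and passes it in,
-- so there is no need to read back an IDENTITY value after the INSERT.
CREATE TABLE Orders (
    OrderID uniqueidentifier NOT NULL PRIMARY KEY,
    CustomerName varchar(50) NOT NULL
)

-- Key assigned by the client application before the INSERT:
INSERT INTO Orders (OrderID, CustomerName)
VALUES ('6F9619FF-8B86-D011-B42D-00C04FC964FF', 'Contoso')

-- Or generated server side:
INSERT INTO Orders (OrderID, CustomerName)
VALUES (NEWID(), 'Fabrikam')
```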
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"- TW" <Thumper@.kqrsrocks.com> wrote in message
news:151a6e6b.0402231323.29f24d45@.posting.google.com...
Because of the problem getting IDENTITY primary key values back when
inserting batches of rows, I would like to experiment with using
GUIDs. Within an application, I would like to assign the primary keys
to the rows and then pass them into the INSERT statements. Then I
wouldn't have to worry about using triggers or Identity scope to
determine the new primary keys.
My question is basically, what's the best datatype to store the GUIDs
in the column? From what I've read so far, it looks like
UniqueIdentifier or CHAR(40) are my options. Is there any drawback to
using UniqueIdentifier?|||TW,
I've used both, and it nearly always comes down to interoperability.
Some systems cannot deal with binary so you have to go varchar(36)/char(36).
Where did you get 40, incidentally?
James Hokes
"- TW" <Thumper@.kqrsrocks.com> wrote in message
news:151a6e6b.0402231323.29f24d45@.posting.google.com...
> Because of the problem getting IDENTITY primary key values back when
> inserting batches of rows, I would like to experiment with using
> GUIDs. Within an application, I would like to assign the primary keys
> to the rows and then pass them into the INSERT statements. Then I
> wouldn't have to worry about using triggers or Identity scope to
> determine the new primary keys.
> My question is basically, what's the best datatype to store the GUIDs
> in the column? From what I've read so far, it looks like
> UniqueIdentifier or CHAR(40) are my options. Is there any drawback to
> using UniqueIdentifier?|||Excellent choice to use Guids, imho.
Vyas has already pointed you to a good article on the topic, but here are a
couple of extra things not mentioned in that article:
(a) A benefit of using Guids instead of Identities is that if you ever need
to implement horizontal partitioning on the table, it will be substantially
easier with Guids. With Guids, the partitioning process is virtually
seamless to the application, but partitioning tables with Identities nearly
always breaks the application.
(b) On the other hand, a problem with using Guids which is not mentioned in
that article is that T-SQL has no ISGUID()-type function, which causes minor
coding issues. Of course, it's possible to roll your own though.
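
One possible roll-your-own sketch (the function name and the LIKE-pattern approach are illustrative, not a built-in API):

```sql
-- Hypothetical IsGuid-style check: matches the canonical 36-character
-- xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx form with a LIKE pattern built
-- by substituting a hex character class for each 'x'.
CREATE FUNCTION dbo.fn_IsGuid (@s varchar(50))
RETURNS bit
AS
BEGIN
    IF @s LIKE REPLACE('xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
                       'x', '[0-9A-Fa-f]')
        RETURN 1
    RETURN 0
END
```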
Regards,
Greg Linwood
SQL Server MVP
"- TW" <Thumper@.kqrsrocks.com> wrote in message
news:151a6e6b.0402231323.29f24d45@.posting.google.com...
> Because of the problem getting IDENTITY primary key values back when
> inserting batches of rows, I would like to experiment with using
> GUIDs. Within an application, I would like to assign the primary keys
> to the rows and then pass them into the INSERT statements. Then I
> wouldn't have to worry about using triggers or Identity scope to
> determine the new primary keys.
> My question is basically, what's the best datatype to store the GUIDs
> in the column? From what I've read so far, it looks like
> UniqueIdentifier or CHAR(40) are my options. Is there any drawback to
> using UniqueIdentifier?


Best Front End

Hello,

I am new to SQL Server. I currently have an Access 2007 database and next year my client would like to migrate to SQL Server. Which Front End is the best...Access, VB.net or some other platform altogether?

Thanks,

M. Wilson

This is not an SSIS question. Moving to the "Getting started with SQL Server" forum.|||

There is no right answer to that as it really is more subjective and based on whatever skill sets are available to the business. Pretty much any front end will work fine if it is coded correctly.

On the other hand, many DBAs would say no front end is the best front end

-Sue

|||

As Sue already mentioned, this is a tough question and should be based on the environment and the business case. It is not only a decision about which front-end technology should be used, but also which architecture (tier-based / thin client / thick client), security, etc. Some decisions might eliminate the usage of certain clients....

Jens K. Suessmeyer

http://www.sqlserver2005.de

|||Deployment issues and costs also play a role in determining the best development tools. If you have heterogeneous clients (a mixture of Windows, Linux, and Mac clients), perhaps a web-based front end is best. Do the clients already have MS Access? If not, perhaps a .NET application is best.

As Sue and Jens indicated, what is best for your environment really depends on your environment.

HTH...

Joe


Best filtering solution for performance?

Hello,

I have a report with approximately 20 report parameters from queries. This is really slowing down the system, although the data load (rows sent back) is not that large. I read on the internet that there are basically three approaches.

1. Query parameters with a lot of different data sources

2. Table filtering

3. Stored procedures

I wonder which approach is the best and why it takes such a long time to get the report with the report parameters (before generating the report itself).

Thank you for your help!

I would think stored procedures would be the best approach. Have you tried using SQL Profiler to trace where the slowdown is?

cheers,

Andrew

|||

Hello,

thank you for your fast response! I have now changed to stored procedures; I think in the long term it is better to use SPs. But now I have problems with multivalued parameters, which will be sent as an array to SQL Server. I found some articles about this matter and will try to solve it. The performance problem seems to be solved, although I don't know why. Will look into that later. Thanks for the Profiler tip! Best regards!
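
For the multivalued-parameter problem mentioned above, a common sketch from that era (the function, table variable, and column names are illustrative) is a split function that turns the comma-separated value the report sends into a table the stored procedure can join against:

```sql
-- Hypothetical split function: turns a list such as '1,4,7' into a
-- one-column table, one row per item.
CREATE FUNCTION dbo.fn_Split (@list varchar(8000))
RETURNS @items TABLE (Item varchar(100))
AS
BEGIN
    DECLARE @pos int
    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(',', @list)
        IF @pos = 0
        BEGIN
            -- Last (or only) item in the list
            INSERT @items VALUES (@list)
            SET @list = ''
        END
        ELSE
        BEGIN
            INSERT @items VALUES (LEFT(@list, @pos - 1))
            SET @list = SUBSTRING(@list, @pos + 1, LEN(@list))
        END
    END
    RETURN
END
```

A procedure can then filter with something like WHERE SomeColumn IN (SELECT Item FROM dbo.fn_Split(@Param)).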

Best Export format for long term compatibility

We have a couple of databases about 5 GB each and around 300 tables in each
database. We have to keep a yearly backup for 6 years. Also, we might be
converting to Oracle in about 2 years. If we want to store the SQL Server
table information in a compatible format that would be restorable in a
couple of years, what would be the recommended export format?
Thanks
Delimited ASCII text is just about the only format that never seems to go
out of style...
Adam Machanic
SQL Server MVP
Author, "Expert SQL Server 2005 Development"
http://www.apress.com/book/bookDisplay.html?bID=10220
"SQL Server newbie" <SQLServernewbie@.discussions.microsoft.com> wrote in
message news:D55CFF19-39E5-4707-B515-079A7F5EAA4B@.microsoft.com...
> We have a couple of databases about 5G each and around 300 tables in each
> databases.We have to keep a yealry backup for 6 years. Also we might be
> converting to Oracle in about 2 years. If we want to store the SQL Server
> tables information in a compatible format that would be restorable in a
> couple of years, what would be the recommended Export format ?
> Thanks
|||Adam,
Thanks for the info. But when I choose ASCII text as an output format it
seems to only pick up one table at a time. Is there any way to specify all
tables or multiple tables during the export?
Thanks
"Adam Machanic" wrote:

> Delimited ASCII text is just about the only format that never seems to go
> out of style...
>
> --
> Adam Machanic
> SQL Server MVP
> Author, "Expert SQL Server 2005 Development"
> http://www.apress.com/book/bookDisplay.html?bID=10220
>
> "SQL Server newbie" <SQLServernewbie@.discussions.microsoft.com> wrote in
> message news:D55CFF19-39E5-4707-B515-079A7F5EAA4B@.microsoft.com...
>
|||My suggestion for ASCII text was really more of a joke than anything
else--suggesting the one format you KNOW will still work. But based on what
you posted, I would just stick with a standard SQL Server backup. You can
always re-install a copy of SQL Server and get the data from there, even if
you switch to Oracle.
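
If delimited text is still wanted for every table, one hedged approach (the server name, output path, and delimiter are placeholders) is to generate one bcp export command per user table from the catalog and run the output from a command prompt:

```sql
-- Generates a bcp export command for each user table in the current
-- database; copy the resulting lines to a command prompt and run them.
SELECT 'bcp "' + DB_NAME() + '.dbo.' + name + '" out "C:\export\'
       + name + '.txt" -c -t "|" -T -S YourServer'
FROM sysobjects   -- SQL Server 2000-era catalog table
WHERE type = 'U'
ORDER BY name
```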

Adam Machanic
SQL Server MVP
Author, "Expert SQL Server 2005 Development"
http://www.apress.com/book/bookDisplay.html?bID=10220
"SQL Server newbie" <SQLServernewbie@.discussions.microsoft.com> wrote in
message news:1CA39E77-19FB-47EC-8E22-CBE6348A5647@.microsoft.com...
> Adam,
> Thanks for the info. But when I choose ASCII Text as an output format it
> seems to only pickup one table at a time. Is thre any way to specify all
> tables or multiple tables during the Export ?
> Thanks
> "Adam Machanic" wrote:


Best equivalent to a Data Transform Task

Hi
What is the best T-SQL equivalent to a Data Transform Task that just copies data from
one db to another?
Reason for asking is that I have a DTS package that just does a number of
copies of tables from one database to another (the databases are on
different servers), and occasionally one of these tasks fails. When it fails,
it doesn't report any errors and the whole execution of the package succeeds
without any errors. It doesn't happen very often, so that also makes it
quite difficult to find out why.
As an attempt to find the reason, I'd like to make these copies/transforms as T-SQL
jobs, so I'd like to know: what would be the best way to do it with T-SQL?
Have any of you any good suggestions to this?
Regards
Steen
It could simply be, something like this:
INSERT INTO DB1.dbo.Table1
(Col1, Col2, Col3, Col4, Col5)
SELECT
Col1, Col2, Col3, Col4, Col5
FROM DB2.dbo.Table1
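
Since the original question involves two servers, a hedged extension of the same pattern (server, database, and table names are placeholders, and [Server1] must already have been registered as a linked server with sp_addlinkedserver):

```sql
-- Cross-server copy using a four-part name through a linked server.
INSERT INTO DB1.dbo.Table1
    (Col1, Col2, Col3, Col4, Col5)
SELECT
    Col1, Col2, Col3, Col4, Col5
FROM [Server1].DB2.dbo.Table1
```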
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"Steen Persson" <SPE@.REMOVEdatea.dk> wrote in message
news:%2315X2ZnYEHA.1224@.TK2MSFTNGP09.phx.gbl...
Hi
What is the best TSql to a Data Transform Task that just copies data from
one db to another.
Reason for asking, is that I have a DTS package that just does a number of
copies of tables from one database to another (the databases are on
different servers) and occasionally one of these task fails. When it fails,
it doesnt report any errors and the whole execution of the package succeds
without any errors. It doesn't happen very often, so that also makes is
quite difficult to find out why.
As an atempt to find the reason, I'd make these copies/transforms as TSql
jobs, so I'd like to know what would be the best way to do it with Tsql?
Have any of you any good suggestions to this?
Regards
Steen
|||Hi Vyas
Thanks for your input - I was thinking about something like that myself, but just
wanted to see if there was a smarter way I didn't know about.
I seem to have some problems getting this to work in a DTS package though.
I've created a package with a source and destination which are 2 different
servers. I've then made a task that executes a sql statement that should
copy the data from one table in 'server1' to another table in 'server2'.
The code is :
INSERT INTO DATABASE2.dbo.Table2
(Col1, Col2, Col3)
SELECT
Col1, Col2, Col3
FROM [Server1].Database1.dbo.Table1
When I execute this in Query Analyzer it works fine, but when I set it
into the data transform task and parse the query, it gives me an "Invalid
object name DATABASE2.dbo.Table2" error.
I'm building this package on server2, so to some degree I understand that it
can't find/see server1, but why does it then work in QA?
I've then in the same Transform task tried to do a sp_addlinkedserver -
USE master
GO
EXEC sp_addlinkedserver
'Server2',
N'SQL Server'
GO
When I try this in QA, it tells me that the server already exists, and when I
add it to my SQL statement in the Transform Task, it gives me a syntax error
around the "GO" statements.
It might just be me doing something stupid, but I just can't see
why it doesn't work. It also puzzles me a little bit why it reacts differently
in QA and in the Transform Task, but most likely there is a good reason for
Regards
Steen
"Narayana Vyas Kondreddi" <answer_me@.hotmail.com> skrev i en meddelelse
news:ep9wYdnYEHA.2972@.tk2msftngp13.phx.gbl...
> It could simply be, something like this:
> INSERT INTO DB1.dbo.Table1
> (Col1, Col2, Col3, Col4, Col5)
> SELECT
> Col1, Col2, Col3, Col4, Col5
> FROM DB2.dbo.Table1
> --
> HTH,
> Vyas, MVP (SQL Server)
> http://vyaskn.tripod.com/
> Is .NET important for a database professional?
> http://vyaskn.tripod.com/poll.htm
>
> "Steen Persson" <SPE@.REMOVEdatea.dk> wrote in message
> news:%2315X2ZnYEHA.1224@.TK2MSFTNGP09.phx.gbl...
> Hi
> What is the best TSql to a Data Transform Task that just copies data from
> one db to another.
> Reason for asking, is that I have a DTS package that just does a number of
> copies of tables from one database to another (the databases are on
> different servers) and occasionally one of these tasks fails. When it fails,
> it doesnt report any errors and the whole execution of the package succeds
> without any errors. It doesn't happen very often, so that also makes is
> quite difficult to find out why.
> As an atempt to find the reason, I'd make these copies/transforms as TSql
> jobs, so I'd like to know what would be the best way to do it with Tsql?
> Have any of you any good suggestions to this?
> Regards
> Steen
>
>
|||Hi
I have played a little bit with the code, and it now seems like I've got it
working without the linked server option.
I then just have one more simple question - is there an easy or simple way
to get the INSERT INTO command to take all the columns in the table without
having to type them all in? Some of the tables I'm working on have quite a
number of columns, so I'm already tired just thinking of having to type all
these in by hand.
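
One quick, hedged way to avoid typing long column lists (the table name is a placeholder) is to generate the list from the metadata views and paste the result into the statement:

```sql
-- Produces one row per column ("Col1," "Col2," ...) in table order;
-- paste the output into the INSERT and SELECT lists and drop the last comma.
SELECT COLUMN_NAME + ','
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Table1'
ORDER BY ORDINAL_POSITION
```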
Regards
Steen
"Steen Persson" <SPE@.REMOVEdatea.dk> skrev i en meddelelse
news:uWYCVQoYEHA.3972@.TK2MSFTNGP12.phx.gbl...
> Hi Vyas
> Thanks for your input - I was thinking about something like that myself, but just
> wanted to see if there was a smarter way I didn't know about.
> I seem to have some problems getting this to work in a DTS package though.
> I've created a package with a source and destination which are 2 different
> servers. I've then made a task that executes a sql statement that should
> copy the data from one table in 'server1' to another table in 'server2'.
> The code is :
> INSERT INTO DATABASE2.dbo.Table2
> (Col1, Col2, Col3)
> SELECT
> Col1, Col2, Col3
> FROM [Server1].Database1.dbo.Table1
> When I execute this in a Query analyser it works fine, but when I set it
> into the data transform task, and parse the query, it gives me an "Invalid
> object name DATABASE2.dbo.Table2" error.
> I'm building this package on server2, so to some degree I understand that it
> can't find/see server1, but why does it then work in QA?
> I've then in the same Transform task tried to do a sp_addlinkedserver -
> USE master
> GO
> EXEC sp_addlinkedserver
> 'Server2',
> N'SQL Server'
> GO
> When I try this in QA, it tells me that the server already exists, and when I
> add it to my SQL statement in the Transform Task, it gives me a syntax error
> around the "GO" statements.
> It might just be me doing something stupid, but I just can't see
> why it doesn't work. It also puzzles me a little bit why it reacts differently
> in QA and in the Transform Task, but most likely there is a good reason for
> that.
> Regards
> Steen
>
> "Narayana Vyas Kondreddi" <answer_me@.hotmail.com> skrev i en meddelelse
> news:ep9wYdnYEHA.2972@.tk2msftngp13.phx.gbl...