Showing posts with label working.

Sunday, March 25, 2012

Best Practice-working with SQL Express Database

Hi,

I am new to working with databases using .NET C#. I would really appreciate it if someone could point me to a link where I can find best practices for working with a SQL Express database in my C# web service applications.

preferably i would like to have answers for:

- should I have one database, or one for testing and one real (production) database?

- db security

- whether to use Windows authentication or database user authentication

etc.

Best practices for SqlExpress will depend somewhat on your environment/requirements. Since you are developing a web service, I suggest asking the question in one of the asp.net forums (forums.asp.net).

SqlExpress has its own forum here on the MSDN forums. I'll move the thread over there for more general info on Express best practices.

Thursday, March 22, 2012

Best Practices Question for Outputting

Hey guys,
Little bit of a newbie question here... I have a database with about 20 or so tables in a relational model. I am now working on an output scheme and had a quick question regarding best practices for outputting. Would it be best to
1) Set up a view that basically joins all of these tables together, then bind a DataSet/DataTable to it and output as needed?
2) Setup individual views for each table and run through them?
Thanks for the help!
e...
I've never liked creating do-everything views. You'll never get the same performance as you would by just creating individual stored procedures which join the tables you need to get the specific fields and records you need to fulfill each type of query or scenario you have. Unless you have a pretty simple site that doesn't do more than a couple of very similar things, it's a lot of overhead that's not needed. Your other question: why set up a view on one table? Unless you're doing a lot of calculated fields in the view that are derived from underlying fields in the table, that's a waste. Not knowing anything really about your situation, my stock advice is to create a stored procedure for every type of query you'll need. Add parameters as needed, but each proc should fulfill a specific need. Don't try to make a proc too general. They tend to get bigger and more confusing over time when they try to do too many different things.
|||One very good thing about views is that they reduce redundancy in your procedures. I'd personally make a few views of the most common types of joins you'd make. I've seen this as a problem with many database-driven sites and applications where one table change requires you to alter 30 stored procedures and code for multiple pages. A lot of minor changes can remain minor if you consolidate a little. It's extremely funny though when you have issues where a field name is spelled ammount. :)
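As a minimal sketch of the one-focused-proc-per-query advice above (the table and column names here are hypothetical, not from the original posts):
-- Returns open orders for one customer, joining only the tables this query needs
CREATE PROCEDURE dbo.GetOpenOrdersForCustomer
    @CustomerId int
AS
SET NOCOUNT ON
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
WHERE o.CustomerId = @CustomerId
  AND o.Status = 'OPEN'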

best practices for rebuilding full text search

Hi
We are working with a SQL Server 2000 using full text search, which works
well so far. However, we are not sure of the best way to ensure the
full text search is always working properly. As changes to the database
should be immediately reflected in a search query, we have change tracking
enabled. But should we now rebuild the full text catalog from time to time?
We are thinking about creating a job that runs every two months that
completely rebuilds the full text index (exec sp_fulltext_catalog
N'OurFullTextCataloge', N'start_full').
Would you suggest doing this, or is having change tracking enabled enough?
Thanks for any answer.
Greetings
Daniel
Having change tracking should be enough.
There should be no reason for you to rebuild other than adding or removing a
noise word list.
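For reference, a sketch of the SQL Server 2000 commands involved (the table name is hypothetical; the catalog name is from the post above):
-- Enable change tracking with background index updates on one table
EXEC sp_fulltext_table N'dbo.MyTable', 'start_change_tracking'
EXEC sp_fulltext_table N'dbo.MyTable', 'start_background_updateindex'
-- The occasional full rebuild, as the proposed bi-monthly job would run it
EXEC sp_fulltext_catalog N'OurFullTextCataloge', 'start_full'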
"Daniel Knpfel" <d.kn@.iph.ch> wrote in message
news:eXcSOhLjHHA.4772@.TK2MSFTNGP05.phx.gbl...
> Hi
> We are working with a sql server 2000 using full text search, which works
> well so far. However, we are not sure which is the best way to ensure the
> full text search is always working properly. As changes to the database
> should be immediatly reflected in a search query, we have change tracking
> enabled. But should we now rebuild the fulltext catalogue from time to
> time. We are thinking about creating a job that runs every two months that
> completly rebuilds the full text index.( exec sp_fulltext_catalog
> N'OurFullTextCataloge', N'start_full' )
> Would you suggest to do this or is having change-tracking enabled enough?
> Thanks for any answer.
> Greetings
> Daniel
>
>

Tuesday, March 20, 2012

best practices for internationalization on sql

I'm working on an international project with about 70 different countries and locales on one database. We store our data in nchar, nvarchar, and ntext fields to handle the different languages' characters, but when we try to sort data, for example for Danish, SQL Server 2000 sorts it with a binary sort, so the result set is
a
?
b
c
d
......
but when we sort this data by
"select column1 from table where column2='true' order by column1 collate Danish_Norwegian_CS_AS "
the result set will be like
......
v
w
x
y
z
?
So we want the second one, in other words the alphabetic sort order.
We can get the same result set on the .NET side by setting CultureInfo to Danish, binding the data into a DataSet, and sorting the data in a DataView. I want to know which way is best: the SQL side or the .NET side.
If your answer is the SQL side, how can I pass the client collation to my stored procedure?
I wrote a procedure like this:
ALTER PROCEDURE dbo.dt_getproductcollation_test
@collation nvarchar(50)
AS
DECLARE @cmd nvarchar(3000)
SET NOCOUNT ON
SET @cmd = 'select column1 from tcollate where [column2]=1 ORDER BY COLUMN1 COLLATE ' + @collation
EXECUTE(@cmd)

But this way I guess my procedure won't be precompiled because of the EXECUTE statement, and it will also reduce my procedure's performance.
Does anybody have best practices for this issue?
Thank You.

If it is easy to implement and the result set is not too big, I think it would be better to sort it on the middle tier/client, i.e., on the .NET side. This will avoid the complicated logic of doing a dynamic COLLATE on the SQL side, and save some server cycles.

As you said, using dynamic SQL in the stored procedure will cause a recompile every time, which is certainly not good. You could build multiple versions of the same proc to handle different collations, or build the SQL statements on the client, but neither is a good/convenient option.
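As a minimal sketch of the multiple-versions idea from the reply above, one proc can branch to static statements per supported collation (the culture codes and the second collation are illustrative assumptions; each branch stays plain, precompilable SQL):
CREATE PROCEDURE dbo.GetProductsSorted
    @culture char(2)
AS
SET NOCOUNT ON
IF @culture = 'da'
    SELECT column1 FROM tcollate WHERE [column2] = 1
    ORDER BY column1 COLLATE Danish_Norwegian_CS_AS
ELSE IF @culture = 'tr'
    SELECT column1 FROM tcollate WHERE [column2] = 1
    ORDER BY column1 COLLATE Turkish_CS_AS
ELSE
    SELECT column1 FROM tcollate WHERE [column2] = 1
    ORDER BY column1 -- server default collation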


Best practices for backup and restore of large replicated databases

Hello,
I am working on a project where we have a large database that is replicated.
My task is to create a backup/restore plan for this project.
I know the basics of backup/restore, but I am looking for best practices and
experiences from those of you who have done this already on a large scale.
Both publisher and subscribers are allowed to do updates. Restore time at
subscribers has to be short, and therefore I am looking for a solution where
both publisher and subscribers are backed up.
Thanks in advance for any pointers to white papers, books, emails, etc.
Best regards,
Vemund
Hi Vemund,
Thanks for your post and I would like to take ownership of this thread and
help you with this issue.
Based on your post, I understand that you would like to find the best
practice for backing up and restoring a large, replicated database.
Have I understood you? Correct me if I am wrong.
First of all, please understand that it is hard to say what the best
practice is for your project, as I am not sure how important the data is
to you and how much data loss would be acceptable. You should discuss
this with your users and find the best strategy for your project.
Generally speaking, here are some general recommendations on database
disaster recovery strategy:
1. Make regular backups that reflect your backup strategy. Ask yourselves
the question - how much data can I afford to lose? (1 min, 1 hour, 1 day
worth). Then ensure that the frequency of the backups reflects the answer
to that question.
2. Make backup to a different media from where the database file reside.
That way you protect against a central point of failure. You can consider
backing up to tape or to another disk (share). Consider moving the backed
up data off-site to another location to protect against disasters like
fires, floods, hurricanes, tornadoes, etc.
3. Test your backups on a regular basis by ensuring they are restorable
on another system. Then run DBCC CHECKDB on those to ensure that neither
the original database nor the backup is damaged.
4. Keep in mind that the data stored in your database(s) may be one of the
most valuable assets your organization has!
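A minimal T-SQL sketch of points 1 through 3 (the database name, logical file names, and backup share are all hypothetical):
-- Nightly full backup to a different machine (point 2)
BACKUP DATABASE SalesDb TO DISK = N'\\backupserver\sql\SalesDb_full.bak' WITH INIT
-- Frequent log backups, scheduled to match the acceptable data loss (point 1)
BACKUP LOG SalesDb TO DISK = N'\\backupserver\sql\SalesDb_log.trn'
-- Restore onto another system, then verify it (point 3)
RESTORE DATABASE SalesDbTest FROM DISK = N'\\backupserver\sql\SalesDb_full.bak'
    WITH MOVE 'SalesDb_Data' TO 'D:\test\SalesDbTest.mdf',
         MOVE 'SalesDb_Log' TO 'D:\test\SalesDbTest.ldf'
DBCC CHECKDB ('SalesDbTest')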
In addition, please review the following information in SQL Books On-Line
"Designing a Backup and Restore Strategy"
Here are some precautions a database administrator should take to ensure
the safety of the data:
1. It is always a good idea to ensure that your backup strategy is
sufficient to recover from a catastrophic failure. Offsite storage and
other precautions are appropriate.
2. Test the database restore operation in a secondary or test database on a
frequent basis.
3. Ensure that any caching devices can handle all failure situations (power
outage, bad sectors, bad drives, system outage, lockups, power spike, and
so forth).
4. Ensure that your caching device:
-- Has integrated battery backup.
-- Can reissue writes on power up.
-- Can be fully disabled if necessary.
-- Handles bad sector re-mapping realtime.
5. Enable torn page detection; it has little performance impact.
6. Configure RAID drives allowing for a hot swap of a bad disk drive, if
possible.
7. Use newer caching controllers that allow addition of more disk space
without
restarting the OS. This can be an ideal solution.
Check the following which covers some of this in detail:
SQL Server capacity and storage guide:
http://www.microsoft.com/technet/pro...n/sqlops6.mspx
Windows backup guide:
http://www.microsoft.com/technet/pro...aintain/backup
rest/br04.mspx
Moreover, here is one more WebCast for you, which will discuss Microsoft
SQL Server database files, the different kinds of database backup options
that are available in SQL Server 2000, and some of the factors to consider
when you design a backup strategy. It will talk about best practices for
backing up and restoring database files. It will review some case studies
and talk about how to troubleshoot problems with backup and restore
operations.
Support WebCast: Microsoft SQL Server 2000: Understanding Backup and Restore
http://support.microsoft.com/?id=329833
Thank you for your patience and cooperation. If you have any questions or
concerns, don't hesitate to let me know. We are here to be of assistance!
Sincerely yours,
Mingqing Cheng
Online Partner Support Specialist
Partner Support Group
Microsoft Global Technical Support Center
Introduction to Yukon! - http://www.microsoft.com/sql/yukon
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!
|||Hi Vemund,
I am just checking on your progress regarding the information that was sent
to you! I wonder whether my suggestion was useful, or whether you would like
to receive more information on this.
If you encounter any difficulty, please do not hesitate to let me know.
Please post here and let me know the status of your issue. Looking forward
to hearing from you soon.
Sincerely yours,
Mingqing Cheng
Online Partner Support Specialist
Partner Support Group
Microsoft Global Technical Support Center
Introduction to Yukon! - http://www.microsoft.com/sql/yukon
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!

Monday, March 19, 2012

Best Practice/recommendation dev data maint plans

We are working on converting to a SQL 2005 database. During the conversion we are having to rewrite a lot of code and are doing a lot of initial testing and development on development data. This is causing our transaction logs to get really big. I have created a maintenance plan that runs nightly and backs up the database and transaction log, but throughout the day the transaction logs keep growing and eating up a ton of disk space. Does anyone have suggestions on what sort of maintenance plan I can set up to run on my development data? At this point I am not concerned about being able to roll back the database, just keeping it as small as possible and "healthy".

All ideas are appreciated

Thanks

Chris

Hi,

If your database has its recovery model set to FULL, you can schedule a T-Log backup half-hourly or hourly to keep it in shape.

BTW, are you doing re-indexing / bulk insertions or updates?

Hemantgiri S. Goswami
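A minimal sketch of both options (the database name, path, and logical log file name are hypothetical); option B fits development data where point-in-time recovery is not needed:
-- Option A: keep FULL recovery and back up the log on a short schedule
BACKUP LOG DevDb TO DISK = N'D:\Backups\DevDb_log.trn'
-- Option B: switch to SIMPLE recovery so the log truncates on checkpoint
ALTER DATABASE DevDb SET RECOVERY SIMPLE
-- One-time shrink after the log has already ballooned
DBCC SHRINKFILE (DevDb_Log, 100)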

|||

Would you recommend running the hourly t-log backups by creating a maint plan?

I have a maintenance plan running nightly that is performing reindexing, updating stats, and shrinking the db. We are also bringing data in via text files and bulk inserting, but when we do this we shut off as much of the logging as possible.

Would you happen to know of any articles, white papers or anything like that I could read up on?

Thanks for your time.

Chris

|||

Hi,

As you said, you have a maintenance plan running nightly which does reindexing. Reindexing actually keeps your T-Log file growing; what plan do you have for shrinking? Refer to http://hemantgirisgoswami.blogspot.com/2006/03/cause-for-t-log-become-full-and-how-to.html

Regards

Hemantgiri S. Goswami

|||Thanks for the suggestions. The articles are very helpful.

Best Practice Question (clustering and remote locations access)

We have a SQL Server here in Portland, we have a new office in Philadelphia,
and we need to have both locations working off the same live data.
Regardless, we will be clustering the SQL Server here, but if we have a fibre
line (either DS3 45Mbps or OC3 155Mbps) connecting the two sites, what
would be the best thing to do: have just the cluster here and Philly works
off of that, or add another SQL Server to the Philly office and have it sync
with the ones here?
Just wondering about the pros and cons of both, what would be faster & more
efficient, etc., and wanted to get the opinions of some folks here. I'm not
really concerned about redundancy or uptime at this point; we will tackle
that once we get this first question decided on.
Thanks!
Scott
Hi
If your comms are reliable and have the capacity and response times, then
maintaining the single cluster will be the simplest solution.
Having an internet/intranet based application would be an alternative;
similarly, a Citrix-type solution may be another option to consider.
John
"Scott McDonald" wrote:

> We have a SQL server here in Portland, we have a new office in Philadelphia
> and we need to have both locations working off the same live data.
> Regardless we will be clustering the SQL server here, but if we have a fibre
> line (either will be DS3 45MB or OC3 155MB) connecting the two sites what
> would be the best thing to do, have just the cluster here and phili works
> off of that, or add another SQL server to the phili office and have it sync
> with the ones here?
> Just wondering pros and cons of both, what would be faster & more efficient
> etc and wanted to get the opinions of some folks here. I'm not real
> concerned about redundancy or uptime at this point, we will tackle that once
> we get this first question decided on.
> Thanks!
> Scott
>
>
|||this might be a situation in which you would consider Merge Replication.
Not sure, it's a tough question.
Greg Jackson
PDX, Oregon

Best Practice for storing Blog text

Hi..

I am working on creating a blog for my site (not yet ready). Here's the idea:

I will have an 'Add a blog' page with a textarea and a submit button. I will write the blog post in this textarea and hit submit. It will be added to the database, and I will retrieve it in the actual blog page and bind it to a DataList. The blog text even has the <br><font> tags etc., but then I'll have to reframe the keywords with [keyword], I guess.

Is this the correct practice? Is storing blog text in a SQL database a good idea? Is there a better way of doing the same?

Thanks.
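For what it's worth, a minimal sketch of a table for this, with all names hypothetical (ntext because this is the SQL 2000 era; the Body column holds the raw HTML from the textarea):
CREATE TABLE dbo.BlogPost (
    PostId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Title nvarchar(200) NOT NULL,
    Body ntext NOT NULL,
    CreatedAt datetime NOT NULL DEFAULT GETDATE()
)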

Sure, storing the blog data in a SQL Server database is the usual thing, like the blog module in DNN. But anyway, besides doing this fun thing yourself, there is probably already a blog module out there for free which can be downloaded with source code and customized by yourself.

HTH, Jens Suessmeyer.

|||

Thank you for the info. I have Googled around a bit but found nothing... hmmm... if you happen to have a link, then let me know.

Thanks.

|||

Did you have a look in www.sourceforge.net ? Once searched I found some entries, perhaps dasBlog: http://swik.net/dasBlog/SourceForge.net%3A+Project+File+Releases%3A+newtelligence+dasBlog+Community+Edition/dasblogce+dasBlog+1.8.5223.2+released+%28Mon%2C+07+Nov+2005+14%3A53%3A25+GMT%29/thr

HTH, jens Suessmeyer.

Wednesday, March 7, 2012

Best method or is it even worth the effort?

Hi,

I have a question regarding a project I am working on and I'm not sure what the best way to do it may be, I am new to SSIS and would like to gain some exposure using it, but don't know if this project is overkill or even possible using SSIS.

*** PROBLEM ****

Basically I have a large number of flat files, and each file contains different columns and data. What I want to do is import all of the data from the flat files into corresponding tables in SQL Server. The tables in SQL Server are already defined with a data structure to match each file. The first 10 characters of the flat file name are also the same as the name of the SQL table. So it seems easy enough to just import flat file to table one by one, but I cannot figure out how to match the flat file structure to each table without setting up a task for each table (there are roughly 50-60 tables). Another reason I want to do it with SSIS is that these flat files have to be imported on a weekly basis.

**********************

Is it in my best interest to just create a stored procedure that basically loops through the files and imports the data, or do you think using SSIS is the best solution? I have looked at SSIS a bit and thought maybe I could use the Bulk Insert task, but it appears it cannot adjust dynamically to the column specs. I figured out how to loop through the flat files using the Foreach task, but I could only get it to import into one SQL table. I also started looking at the Data Flow task and thought about creating a Script task, but I'm not sure that would work, and I don't want to spend a lot of time figuring that out only to find it doesn't.

Any suggestions or thoughts are appreciated.

Thank you
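For comparison, a rough sketch of the stored-procedure-loop idea mentioned above (the driver table, folder path, and file naming are all hypothetical assumptions):
-- One row per target table; each flat file is named after its table
DECLARE @table sysname, @sql nvarchar(500)
DECLARE file_cursor CURSOR FOR
    SELECT TableName FROM dbo.WeeklyImportTables
OPEN file_cursor
FETCH NEXT FROM file_cursor INTO @table
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'BULK INSERT dbo.' + QUOTENAME(@table) +
               N' FROM ''C:\Import\' + @table + N'.txt''' +
               N' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'
    EXEC (@sql)
    FETCH NEXT FROM file_cursor INTO @table
END
CLOSE file_cursor
DEALLOCATE file_cursor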

Are you saying that there are many record layouts/formats in the same file? Or one file has only one format, but there are multiple formats and multiple files with those formats?

Either way, if you have 50 file formats, it seems like you're going to have to configure 50 flat file source connections. There may be a good way with the Script task, but that's not my strength. But I would choose SSIS over the other methods - at the least, it sounds like you should look into the conditional split transformation, and variables. Let us know about the above question.

|||

To implement this without scripting (I'm under the impression that your 50 files / tables have different formats), you'd need to create 50 data flows.

It certainly could be implemented via a script, but I'm not sure what benefit you'd get from writing the script in SSIS versus just creating a VB or C# application.

|||

Not sure what it is you are doing, but it sounds EXACTLY like what I have recently done - except with 45 files/tables. I started to go down the route of individual DFs, but instead created individual packages for each of the files I have to process. Either way will work. Basically, we have a parent package that kicks off the children packages to process each file. This is done through iterating through our known files for each organization that will submit these 45 files.

This is working as designed, and we have not run into any trouble. We are still working on getting some better error logging now. Please feel free to post if you have any questions.

Regards

|||

I was hoping to create something a little less cumbersome, although I think that will work and may be my last resort. I have something right now that seems to work somewhat; I need to do further testing and development to see if it will meet my needs.

What I have done is create a simple Foreach loop, and inside the Foreach loop I have a Script task that takes the return value from the loop, which is the file name, and parses the string into values that I can pass into the Bulk Insert task.

In my script I set the Dts.Connections.Item("BulkFile").ConnectionString to the filename variable from the Foreach loop, and then I parse the string to get the name of my table and set a user-defined variable I just called "TableName" with the parsed string value. I then open the Bulk Insert task and set the DestinationTableName expression to my user-defined variable "TableName".

My initial findings appear to work well, but so far I have only imported six of the tables dynamically; I still have 50+ to go. I have to create some stored procedures to clean out the tables before the bulk insert due to primary key constraints etc., but basically once a week I am flushing all of the data in these tables and replacing it with the data from the flat files.

Saturday, February 25, 2012

Best indexes to use

Hello,
I am currently working on some database tables that are suffering from
performance issues and would like some advice on which indexes are best
to use.
There exists a composite key on three fields out of 7 in the table and
these three fields are the ones normally used when searching the table
as well.
1) As most queries return only single answers, would it be better to
add three separate indexes or a composite index?
2) Also, from my research, most people indicate using composite index
would be better - is this the case?
3) If I use a composite index, would it be used when searching using
only one or two of the composite fields?
Thanks in advance,
j
|||You can look at the execution plan to see what indexes are being used by
your query. I'll try to give an overly simplified explanation:
Suppose I have a table with 7 columns, Col1 thru Col7.
I have an index named Index1 using Col1, Col2, Col3 as a composite key.
Running the following queries (data volumes/optimizer whims may alter
these behaviors):
-- This will do an index seek using Index1
SELECT Col1, Col2
FROM MyTable
WHERE Col1 = 'x'
-- This will do an index SCAN using Index1
SELECT Col1, Col2
FROM MyTable
WHERE Col2 = 'x'
-- This will do an index seek using Index1, with a bookmark lookup to
get Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col1 = 'x'
-- This will do an index SCAN using Index1, with a bookmark lookup to
get Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col2 = 'x'
Does that make sense? Now let's add another index, Index2, to the
table, using Col2, Col4 as a key:
-- This will now do an index seek using Index2
SELECT Col1, Col2
FROM MyTable
WHERE Col2 = 'x'
-- This will do an index seek using Index2, with a bookmark lookup to
get the value of Col1, not Col4
SELECT Col1, Col2, Col4
FROM MyTable
WHERE Col2 = 'x'
These are ridiculously simple examples, and when run against real data
volumes, the optimizer may choose a different course of action,
depending on statistics, distribution of values, etc...
Tracy McKibben
MCDBA
http://www.realsqlguy.com
|||Cheers for your reply - that's great with regard to the composite index
issue I'm having, but in these kinds of scenarios, would I be right in
thinking that a non-clustered index covering the three columns would be
of more use than a clustered index, because I am using queries that
bring back small numbers of results (usually one)?
j
|||I'm going to have to use the canned response of "it depends" on this
one. *GENERALLY* clustered indexes perform best on range scans, BUT,
they can also be used by non-clustered indexes to avoid doing bookmark
lookups. Your best bet is to experiment a little, compare the execution
plan of your query when different indexes are available, see what
performs best given your data.
Tracy McKibben
MCDBA
http://www.realsqlguy.com
|||Tracy's given some nice samples here. I'll just chime in with a little extra
which might help you understand.
Ideally, you want the columns that are being filtered in the query + those
being returned in the select list in the index. If you have many different
queries that run against the table, you might need a few different indexes
to accommodate their various requirements. Don't be afraid of adding more
than one column to an index & also don't be afraid of adding multiple
indexes to a table.
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
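As a concrete sketch of this covering-index advice, reusing the hypothetical names from Tracy's examples (SQL 2000 style, so the extra columns simply go into the composite key):
-- Covers: SELECT Col1, Col2, Col4 FROM MyTable WHERE Col2 = 'x'
-- All three columns live in the index, so the seek needs no bookmark lookup
CREATE NONCLUSTERED INDEX IX_MyTable_Col2_Col4_Col1
ON MyTable (Col2, Col4, Col1)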
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:4506C241.8090806@.realsqlguy.com...
> jumblesale wrote:
>> Hello,
>> I am currently working on some database tables that are suffering from
>> performance issues and would like some advice on which indexes are best
>> to use.
>> There exists a composite key on three fields out of 7 in the table and
>> these three fields are the ones normally used when searching the table
>> as well.
>> 1) As most queries return only single answers, would it be better to
>> add three seperate indexes or a composite index?
>> 2) Also, from my research, most people indicate using composite index
>> would be better - is this the case?
>> 3) If I use a composite index, would it be used when searching using
>> only one or two of the composite fields?
>> Thanks in advance,
>> j
> You can look at the execution plan to see what indexes are being used by
> your query. I'll try to give an overly simplified explanation:
> Suppose I have a table with 7 columns, Col1 thru Col7.
> I have an index named Index1 using Col1, Col2, Col3 as a composite key.
> Running the following queries (data volumes/optimizer whims may alter
> these behaviors):
> -- This will do an index seek using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index1, with a bookmark lookup to get
> Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col1 = 'x'
> -- This will do an index SCAN using Index1, with a bookmark lookup to get
> Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> Does that make sense? Now let's add another index, Index2, to the table,
> using Col2, Col4 as a key:
> -- This will now do an index seek using Index2
> SELECT Col1, Col2
> FROM MyTable
> WHERE Col2 = 'x'
> -- This will do an index seek using Index2, with a bookmark lookup to get
> the value of Col1, not Col4
> SELECT Col1, Col2, Col4
> FROM MyTable
> WHERE Col2 = 'x'
> These are ridiculously simple examples, and when run against real data
> volumes, the optimizer may choose a different course of action, depending
> on statistics, distribution of values, etc...
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||ha! sorry for backing you into a corner there - good advice though,
I'll have to get working on the indexes and check the execution plans -
just wanted a bit of advice before I start playing about on the server.
Thanks for your ideas,
j
|||Hi Tracy
Re >*GENERALLY* clustered indexes perform best on range scans<
I'd love to hear your thoughts on this blog I did yesterday.
http://blogs.sqlserver.org.au/blogs/greg_linwood/archive/2006/09/11/365.aspx
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"Tracy McKibben" <tracy@.realsqlguy.com> wrote in message
news:4506C596.30004@.realsqlguy.com...
> jumblesale wrote:
>> Cheers for your reply - that's great in regards to the composite index
>> issue i'm having, but in these kinds of scenarios, would I be right in
>> thinking that a non-clustered index covering the three columns would be
>> of more use than a clustered index because I am using queries that
>> bring back small numbers (usually one) result?
> I'm going to have to used the canned response of "it depends" on this one.
> *GENERALLY* clustered indexes perform best on range scans, BUT, they can
> also be used by non-clustered indexes to avoid doing bookmark lookups.
> Your best bet is to experiment a little, compare the execution plan of
> your query when different indexes are available, see what performs best
> given your data.
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||Greg Linwood wrote:
> Hi Tracy
> Re >*GENERALLY* clustered indexes perform best on range scans<
> I'd love to hear your thoughts on this blog I did yesterday.
> http://blogs.sqlserver.org.au/blogs/greg_linwood/archive/2006/09/11/365.aspx
>
I have to admit, I've never actually compared clustered vs.
non-clustered indexes in the detail that you have here. I'm guilty of
simply repeating what I've picked up from other sources. Your analysis
makes perfect sense to me, but I'm curious... Your example assumes
we're returning a subset of columns from every row in the table.
Suppose our table contains a million rows, clustered on a "SessionID"
column. Session ID's increment throughout the day. A typical session
contains 100-150 rows, and I want to return every column from the table
for the five sessions that occurred yesterday. Without having hard
numbers to back me up, it seems like it the clustered index wins in this
case. Or would you not consider this to be a "range" scan?
I think it's going to depend on how wide your query is and what sort of
data you're pulling back - can it be covered with a non-clustered index?
Either way, it's a well written article and it certainly has me thinking...
--
Tracy McKibben
MCDBA
http://www.realsqlguy.com
|||Hi Greg,
Just a quick question - because I don't have a unique primary key and
require the use of a composite key on the table, would it potentially
cause a performance hit if I use composite NCIX only and have no CIX on
the table?
Cheers for your posts btw - very interesting blog article too - it
confirmed some tests I thought I'd done incorrectly in a SQL refresher
course I attended last year!
j
|||jumblesale wrote:
> ha! sorry for backing you into a corner there - good advice though,
> i'll have to get working on the indexes and check the execution plans -
> just wanted a bit of advise before I start playing about on the server.
>
I spend most of my time sitting in the corner... :-)
There's just no way to give a definitive answer without having intimate
knowledge of the data involved, and being able to work with it
first-hand. Good luck!
Tracy McKibben
MCDBA
http://www.realsqlguy.com


Best Enterprise Manager clone/wannabe?

Hi,
When working with environments that use the MSDE, we usually plug a laptop
into their network that has Enterprise Manager on it, and work that way.
Is there a good, possibly third-party, product that is not too expensive and
can be loaded on their machine so we don't have to do this? That way I could
go in via the Net and work without going to their location.
I know there are a couple out there, I just haven't been able to find anyone
who has experience with them.
Thanks,
Steve
Why can't you load EM (Client Tools) only?
http://sqlservercode.blogspot.com/
"SteveInBeloit" wrote:

> Hi,
> When working with environments that use the MSDE, we usually plug a laptop
> into their network that has Enterprise Manager on it, and work that way.
> Is there a good possibly third partry product that is not too expensive and
> can be loaded on their machine so we don't have to do this? That way I could
> go in via the Net and work with out going to their location.
> I know there are a couple out there, I just haven't been able to find anyone
> who has experience with them.
> Thanks,
> Steve
|||My ignorance, maybe... I was under the impression that legally you could not.
Steve
"SQL" wrote:
[vbcol=seagreen]
> Why can't you load EM (Client Tools) only?
> http://sqlservercode.blogspot.com/
> "SteveInBeloit" wrote:
|||Legally you may not be allowed to, so you aren't being
ignorant. Check the following articles for an explanation
and tools to manage MSDE - there are some free ones listed.
Of the third party tools, DbaMgr is good.
If you are just looking for a query tool, Toad for SQL
Server works well.
-Sue
|||SteveInBeloit wrote:

> Is there a good (possibly third-party) product that is not too expensive and
http://www.sqlmanager.net/
|||Sorry...forgot the link:
http://www.aspfaq.com/show.asp?id=2442
-Sue
On Thu, 06 Oct 2005 22:44:08 -0600, Sue Hoegemeier
<Sue_H@.nomail.please> wrote:
>Legally you may not be allowed to, so you aren't being
>ignorant.
|||SteveInBeloit wrote:
> Is there a good (possibly third-party) product that is not too expensive and
> can be loaded on their machine so we don't have to do this?
Buy SQL Server Developer Edition for $50 and get it all.
Steve Troxell
|||Have you tried EMS SQL Manager Lite?
http://www.hotscripts.com/Detailed/47270.html
"SteveInBeloit" <SteveInBeloit@.discussions.microsoft.com> wrote in message
news:196C5DC7-6942-4EF3-A057-1126BDE7BC20@.microsoft.com...
> Is there a good (possibly third-party) product that is not too expensive
> and can be loaded on their machine so we don't have to do this?

Friday, February 24, 2012

Best approach

I have what I feel is a simple package to create. I am teaching myself SSIS as I go along.

The source server's SQL 2000 database allows NULL values in columns.

The destination server is also SQL 2000, but its database requires a value in each column.

So I do a basic source select of what I want. I next need to read the values, insert a space where they are null, do some column matching, and insert the rows into the destination server.

I believe I should use a Derived Column transformation with an ISNULL expression to accomplish this.

Maybe there is a better way. Suggestions and comments appreciated.

Ryan

Try using the T-SQL ISNULL() function in your initial query, replacing any null values with an empty string or whatever...

ISNULL(myField,'')
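If it helps, a quick sketch of both approaches - the table and column names below are made up for illustration:

-- T-SQL: replace the NULLs in the source query itself
SELECT
    ISNULL(CustomerName, '') AS CustomerName,
    ISNULL(City, ' ')        AS City
FROM dbo.SourceTable;

If you'd rather handle it in the data flow, note that in the SSIS expression language (unlike T-SQL) ISNULL() only tests for null, so the Derived Column expression takes the conditional form:

ISNULL(City) ? " " : City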

Monday, February 13, 2012


Beginner question - working with URL variables

I want to default one of my report parameters to a value passed in via the
URL. How do I reference a URL variable in the report designer?
TIA,
Brian|||Thanks for your help, Teros. I think I tried what you are suggesting, but
every time I run the report, passing in the URL variable never seems to have
any effect on the value.
Here's what I've got:
Parameter1: This is the variable that I will want to change with a URL
variable; default value is blank, no prompt.
Parameter2: Changeable parameter on the report, defaults to
Parameter1.value.
Then when I call the report, I try passing in a value for Parameter1:
http://127.0.0.1/ReportServer?ReportName&Parameter1=22
The report comes up with Parameter2 still showing blank.
Brian
"Teros" <Teros@.discussions.microsoft.com> wrote in message
news:9A646724-4C92-464B-A4C4-6BE2C6F7A98A@.microsoft.com...
> If you have your url such as:
> http://127.0.0.1/ReportServer?ReportNameX&categoryY=22
> Where ReportNameX is the report and categoryY is your parameter, you
should be able to reference it within your report by setting up a
corresponding parameter named categoryY and then using the usual
Parameters!categoryY.Value. So to have it default that value for a second
parameter, I would think you'd set up your URL-passed parameter, then call
that with the Parameters!blah.value in your default value spot for the 2nd
parameter.
> Hope that helps (and works! :) )
> - T
> "G" wrote:
> > I want to default one of my report parameters to a value passed in via
> > the URL. How do I reference a URL variable in the report designer?
> > TIA,
> > Brian
> >|||That seems ridiculously complicated for something that should be common
practice. Passing a URL variable into a report and manipulating that
variable has to be a trivial task.
I must be doing something wrong, but I have no clue what.
> Hmm... Might be that parameters can't control parameters in the same
report? Quick workaround might be to have a "shell" report that brings in
the URL parameter, then have your main report as a subreport in the shell
(taking up the entire space, so it's transparent to the end user) and pass
the second parameter based on the outer first?
> Did that make any sense?
> - T
> "G" wrote:
> > Thanks for your help Teros. I think I tried what you are suggesting, but
> > everytime i run the report, passing in the URL variable never seems to
have
> > any affect on the value.
> >
> > Here's what i got:
> >
> > Parameter1: This is the variable that i will want to change with a URL
> > variable, default value is blank, no prompt.
> > Parameter2: Changeable parameter on the report, defaults to
> > Parameter1.value.
> >
> > Then when I call the report, i try passing in a value for Parameter1:
> > http://127.0.0.1/ReportServer?ReportName&Parameter1=22
> >
> > The report comes up with Parameter2 still showing blank.
> >
> > Brian
> >
> >
> > "Teros" <Teros@.discussions.microsoft.com> wrote in message
> > news:9A646724-4C92-464B-A4C4-6BE2C6F7A98A@.microsoft.com...
> > > If you have your url such as:
> > >
> > > http://127.0.0.1/ReportServer?ReportNameX&categoryY=22
> > >
> > > Where ReportNameX is the report and categoryY is your parameter, you
> > should be able to reference it within your report by setting up a
> > coresponding parameter named categoryY and then using the usual
> > Parameters!categoryY.Value . So to have it default that value for a
second
> > parameter, I would think you'd set up your URL passed parameter, then
call
> > that with the Parameters!blah.value in your default value spot for the
2nd
> > parameter.
> > >
> > > Hope that helps (and works! :) )
> > > - T
> > >
> > > "G" wrote:
> > >
> > > > I want to default one of my report parameters to a value passed in
via
> > the
> > > > URL. How do i reference a URL variable in the report designer?
> > > >
> > > > TIA,
> > > > Brian
> > > >
> > > >
> > > >
> >
> >
|||I think that you have the concept of how to do this not quite right. You are
not wanting to reference the variable passed on the URL; you want the URL to
set the report parameter you have already defined. Steps to get this
working: first, create a report with report parameters and make sure the
report is working.
Easiest is to just have a blank report with two text boxes. Set the textboxes
as an expression. Using the expression builder you can set them to your
parameter. Open up the report; you will be prompted for a parameter. Fill it
in and make sure the parameter shows up on your report. Now try it via a
URL.
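In case it's useful, here is roughly what that looks like (the server, folder, and parameter names are placeholders, not from your report). One thing worth double-checking: URL access normally wants the full folder path to the report with a leading slash, which may be why the earlier URL had no effect.

Textbox expression (set via the expression builder):
=Parameters!Parameter1.Value

Default value expression for Parameter2:
=Parameters!Parameter1.Value

URL (rs:Command=Render is optional but makes the intent explicit):
http://myserver/ReportServer?/MyFolder/MyReport&rs:Command=Render&Parameter1=22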
Bruce L-C
"G" <brian.grant@.si-intl-kc.com> wrote in message
news:O5yEaYTWEHA.4056@.TK2MSFTNGP11.phx.gbl...
> That seems ridiculously complicated for something that should be common
> practice. Passing a URL variable into a report and manipulating that
> variable has to be a trivial task.