
Tuesday, March 27, 2012

Best solution for this "problem"

Alright, me and mine are planning on creating a big community (no further details possible).
The point is that we've got one web server with PHP that reads from a huge database.
That database is being updated by around 100 users, all of them updating the DB at the same time. So, as you can see, what we need is a good and reliable database which can easily hold up under this much load.

Which database type do you guys recommend?

Need to know more before recommending anything, but if you are thinking about SQL Server, check this link:

http://www.microsoft.com/sql/editions/default.mspx

All the info about the various editions of SQL Server and what they can and cannot support.

Monday, March 19, 2012

Best Practice when copying a table from srv to srv

Hi!

My first post in this great forum. :)
Here goes:

I need some feedback on best practice (or just possible practice!) for creating a copy of a table from one SQL Server to another SQL Server.

I have a stored proc that loops over some server/database/table names and needs to copy a specific table out to them all.

It works OK on the local server, but when I want to go across to another server, trouble starts.

I have tried various approaches.

1) Linked server followed by "INSERT INTO remotesrv.remotedb.dbo.table..."
Result: can't run an ALTER query on the remote server. SELECT statements work fine, though. (See the sketch after this list.)

2) Replication/Subscription
Result: works in general, but it only synchronizes identical tables. Can't alter the structure of the table on the remote server.

3) DTS
Result: works fine, but not generic enough (variable table names are needed).
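
As a side note, the DDL limitation in approach 1 can often be worked around by executing the statement on the remote side. A minimal sketch only, assuming a linked server named REMOTESRV with the "RPC Out" option enabled and hypothetical database/table names:

-- Create the table on the remote server by running the DDL there,
-- through the remote server's own sp_executesql
EXEC REMOTESRV.remotedb.dbo.sp_executesql
    N'CREATE TABLE dbo.MyTable (id INT PRIMARY KEY, name VARCHAR(50))'

-- Once the table exists remotely, a four-part-name INSERT works as usual
INSERT INTO REMOTESRV.remotedb.dbo.MyTable (id, name)
SELECT id, name
FROM dbo.MyTable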

What do you guys use in these situations?

|||

OK, no replies. :)

For future reference I chose the following:

In fact, option 2) Replication/Subscription is open to alterations of table structure. (I just needed to refresh my snapshot file in the test.)

The copying of tables is therefore done via replication, triggered by a stored procedure.

Best Practice promotion of RDL from Dev to Test then onto Prod

I have been tasked with creating our promotion procedures for all
reports in Reporting Services. Can anyone give me any tips or point
me to any references that will help me with this?
Ideally I would develop the report in a dev environment against dev
data.
It would go through our quality assurance process against test data
where both the data and report layout would be verified.
Then promoted to production.
Any help is most appreciated.
Thanks.

|||

I have also been working on the deployment issue and do not have a good process. I would also like to do this on a single SQL Server. Any thoughts?
"Brian H" wrote:
> I have been tasked with creating our promotion procedures for all
> reports in Reporting Services. Can anyone give me any tips or point
> me to any references that will help me with this.
> Ideally I would develop the report in a dev environment against dev
> data.
> It would go through our quality assurance process against test data
> where both the data and report layout would be verified.
> Then promoted to production.
> Any help is most appreciated.
> Thanks.
>

|||

Where I work, we have separate Dev, Staging, and Production environments for SQL as well as the SQL Reporting servers.
Deployment consists of simply publishing stored procedures, RDS, and RDL files in each environment, testing, then promoting to the next until you reach Production.
Was there something more specific you were looking for? Do you have any hardware constraints that might factor in?
Regarding using a single SQL Server, I'm not entirely sure how you might do this, since you still need a separate ReportServer and ReportServerTempDB database for each environment. There may be a place to reconfigure SQL Reporting to use different DB names so that the different databases can be referenced.
Beware that all databases on a server share the same master database. This can be an issue when applying service packs, because deployment to one target database will still affect all databases via the common master database, as occurs with SP1. However, you may be able to work around this by creating multiple instances on the same server. I'm not a DBA, but I think they will have separate master databases in that case.
I hope this helps a bit,
Lance Hunt
http://weblogs.asp.net/lhunt/

Best Practice for storing Blog text

Hi..

I am working on creating a blog for my site (not yet ready). Here's the idea:

I will have an 'Add a blog' page which has a textarea and a submit button. I will write the blog entry in this textarea and hit submit. It will be added to the database, and I will retrieve it on the actual blog page and bind it to a DataList. The blog text even has the <br><font> tags etc., but then I'll have to reframe the keywords with [keyword], I guess.

Is this the correct practice? Is storing blog text in a SQL database a good idea? Is there a better way of doing the same?

Thanks.

Sure, storing the blog data in a SQL Server database is the usual thing; the blog module in DNN does it that way, for example. But instead of building this yourself, perhaps there is already (I assume so) a blog module out there for free which can be downloaded with source code and customized by yourself.

HTH, Jens Suessmeyer.
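
As a rough illustration, a minimal table for the posts might look like this (hypothetical names; NTEXT was the usual type for long Unicode text on SQL Server 2000, NVARCHAR(MAX) on 2005 and later):

CREATE TABLE BlogPost (
    PostId    INT IDENTITY(1,1) PRIMARY KEY,
    Title     NVARCHAR(200) NOT NULL,
    Body      NVARCHAR(MAX) NOT NULL,  -- the raw HTML from the textarea
    CreatedAt DATETIME NOT NULL DEFAULT GETDATE()
)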


|||

Thank you for the info. I have Googled around a bit but found nothing... hmm... if you happen to have a link, then let me know.

Thanks.

|||

Did you have a look at www.sourceforge.net? After a search I found some entries, perhaps dasBlog: http://swik.net/dasBlog/SourceForge.net%3A+Project+File+Releases%3A+newtelligence+dasBlog+Community+Edition/dasblogce+dasBlog+1.8.5223.2+released+%28Mon%2C+07+Nov+2005+14%3A53%3A25+GMT%29/thr

HTH, Jens Suessmeyer.

Thursday, March 8, 2012

Best practice about creating partitions dynamically in AS2000?

Does anyone have a good article, link, or just input concerning the dynamic creation and processing of partitions in a cube?

The scenario: a large amount of data is loaded into the data warehouse daily. In the ETL there is some logic that creates a new table when data comes in for a new quarter (example: the table Fact_Q1_2007 is created when data is received on January 1, 2007, Fact_Q2_2007 is created when data is received on April 1, and so on).

The question is now: how do I set up the logic to create partitions dynamically in the cube, and afterwards make sure that the partitions are processed successfully?

One approach is to use the Decision Support Objects (DSO) API, which can be invoked from tools that support COM (like DTS):

http://msdn2.microsoft.com/en-us/library/aa936638(SQL.80).aspx

>>

Decision Support Objects Programmer's Reference

Microsoft® SQL Server™ 2000 Analysis Services offers substantial opportunity for you to create and integrate custom applications. The server object model, Decision Support Objects (DSO), provides interfaces and objects that can be used with any COM automation programming language.

...

>>

http://msdn2.microsoft.com/en-us/library/aa177800(SQL.80).aspx

>>

...

Use the following code to create an object of ClassType clsPartition:

'Assume an object (dsoCube) of ClassType clsCube exists

Dim dsoPartition As DSO.MDStore

Set dsoPartition = dsoCube.MDStores.AddNew("MyPartition")

>>

Typically, you might then clone an existing "template" partition, and update relevant properties like SourceTable:

http://msdn2.microsoft.com/en-us/library/aa177699(SQL.80).aspx

>>

Properties, clsPartition

...

SourceTable

The name of the fact table for the partition.

SourceTableFilter

Contains the WHERE clause of the SQL statement used to determine which source table rows are to be included in the partition.

>>

|||

Thanks

I read that there should be some tools in the SQL Server 2000 Resource Kit, which is available only on MSDN. But the File Transfer Manager isn't able to download that file ("Application validation failed, transfers are not enabled"), and there seems to be nowhere else to get that kit. I don't know how long Microsoft will take to fix their File Transfer Manager...

Wednesday, March 7, 2012

Best method for query against large database (zip codes)

I am a little unsure of the best way of doing things. I am creating a dealer locator module. I have a ZipCode table which contains all the necessary info such as zip code, latitude, and longitude. I obviously have a Dealers table which has dealer info, including zip code. My question for the community is whether there is a better way to do this process, because in the USA alone there are over 65k zip codes, and large chains can have thousands of locations.

Currently, I grab the zip code and the radius they want results for from a form. I then take the zip code and query the database, returning the lat and long of the zip the user entered on the form.

I pass this lat and long to the next procedure, which incorporates the radius taken in on the form. This calculates all the lats and longs I will use to "box" off my area, to perform a query on areas located within the box.

I then query ZipCodes (SELECT * FROM zipcodes WHERE ...) using <= and >= for all sides of the "box" (high and low latitude, and high and low longitude). At this point an inner join is done by zip code with the zip code on the Dealers table.

Lastly, I do a distance calculation on the returned data, dropping all rows not within the radius.

Besides whether anyone has a better way to do this, does anyone have a suggestion about this:
Do I use an inner join for the dealer locator, or should I just add the lat and long to the Dealers table automatically upon dealer registration of a site?
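
Here is a minimal sketch of the "box" prefilter described above (a sketch only, with hypothetical table and column names: a ZipCodes table with Latitude/Longitude in degrees, and a Dealers table carrying a zip code):

-- Center point: look up the lat/long of the user's zip code
DECLARE @Lat float, @Long float, @Radius float
SET @Radius = 30.0  -- radius in miles, taken from the form
SELECT @Lat = Latitude, @Long = Longitude
FROM ZipCodes
WHERE ZipCode = '92833'

-- One degree of latitude is roughly 69.1 miles; a degree of longitude
-- shrinks by the cosine of the latitude
DECLARE @LatDelta float, @LongDelta float
SET @LatDelta = @Radius / 69.1
SET @LongDelta = @Radius / (69.1 * COS(@Lat * PI() / 180))

-- The "box" query: a composite index on (Latitude, Longitude) keeps this cheap.
-- The exact distance check afterwards drops the corners of the box.
SELECT d.*
FROM Dealers d
INNER JOIN ZipCodes z ON z.ZipCode = d.ZipCode
WHERE z.Latitude BETWEEN @Lat - @LatDelta AND @Lat + @LatDelta
  AND z.Longitude BETWEEN @Long - @LongDelta AND @Long + @LongDelta

On the join-versus-denormalize question: storing the lat/long directly on the Dealers row at registration time avoids the join entirely, at the cost of keeping the copies in sync.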


CREATE PROCEDURE dbo.sp_GetDistanceByZip
(
    @StartZip char(5),
    @EndZip char(5)
)
AS
SET NOCOUNT ON

--DECLARE @StartZip char(5)
--DECLARE @EndZip char(5)

--SET @StartZip = '92833'
--SET @EndZip = '90005'

DECLARE @LatA float
DECLARE @LongA float
DECLARE @LatB float
DECLARE @LongB float
DECLARE @Distance float

-- Convert each coordinate from degrees to radians
SET @LatA = PI() * (SELECT TOP 1 Latitude FROM zips WHERE ZipCode = @StartZip) / 180
SET @LongA = PI() * (SELECT TOP 1 Longitude FROM zips WHERE ZipCode = @StartZip) / 180
SET @LatB = PI() * (SELECT TOP 1 Latitude FROM zips WHERE ZipCode = @EndZip) / 180
SET @LongB = PI() * (SELECT TOP 1 Longitude FROM zips WHERE ZipCode = @EndZip) / 180

-- Great-circle distance via the spherical law of cosines;
-- 3963.1 is the Earth's radius in miles
SET @Distance = ACOS(
        SIN(@LatA) * SIN(@LatB)
      + COS(@LatA) * COS(@LatB) * COS(@LongA - @LongB)
    ) * 3963.1

SELECT @Distance, @LatA, @LongA, @LatB, @LongB
GO
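
Example call, using the same zip codes as the commented-out test block:

-- Returns the distance in miles plus the intermediate radian values
EXEC dbo.sp_GetDistanceByZip @StartZip = '92833', @EndZip = '90005'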

My suggestion for you is to use many short SELECTs rather than a huge join, and index it properly. It'll run much faster.

|||

Thank you for the reply.
Question:
If I have narrowed down the number of possible zip codes with my "box" (only the zips, lats, and longs that fall within the "box" are returned), and I then do the join, is this more taxing on the system than your suggestion?

|||

It is less taxing. You should try to stay away from processing in the SQL part as much as possible.

I would also have gone with a "box" solution.

|||

Thanks for the input.

I would strongly suggest, for anyone interested in this, purchasing the solution along with a subscription for the database. For me, this was more a matter of being able to do it than of saving money. If it were a matter of dollars and cents, this solution would have cost me roughly forty dollars for a site, and I spent an entire day on it. I may not be the best programmer, but I am worth more than $40 a day.

|||

Honestly, I have the entire US/Canada, and I've run that procedure with 10 threads. Each returned in less than 2 seconds. I've condensed it to specific regions and such, but for the most part, I'm very much OK with its performance.

Explain your "boxing" method. That interests me|||Draw a cricle, now draw a box around it, the square should touch the box at four points, all other points contained within the circle are also contained within the box. Why should we query the entire continent or world if we can calculate that box from the radius using latitude and longitude. This drastically limits the number of records that we have to due our distance calculation on. We just get the zipcode and radius from the user, pull the latitude and longitude from the database by doing a select statement of the row containing that zipcode input by the user, next we do some calculations that determine our box from the radius where the centerpoint is the zipcode. Then we select only those records whose latitude and longitude fall within our box. Then we calculate distance based on only those in the box instead of the entire country, continent, etc. Everything outside of our radius distance, gets dropped(these would have still fallen within the box). The results are returned to the user.

I hope I am clear and this explains the "box" technique. Any questions, just post back. There is a little more involved than what I said; I just tried to explain it as simply as I could. It's much easier to draw something like this than to use words.

|||

Hi, I was just wondering... did you ever happen to find a fast solution to your problem?

I am currently working on a project that involves US and Canada zip codes.

Seeing as Canada has over 700,000 postal codes, I really need to find the fastest way to return results.

The way my app will need to work is: pass in one postal code, and return all records within a 50-mile radius.

Can you please let me know how your project went? Can you please share with me how you were able to achieve this?

|||

This really should not be an issue. We have databases of 300 million records that can be queried quickly. It comes down to your indexing strategy.

For retrieving areas within a circle, use a bit of Pythagoras to work out the distances. Job's done, pretty easy. :)

|||

I am also about to take on this type of project with the exact same concept... I need to retrieve businesses located within a 50-mile radius of a zip code, so any input on this would be great. I am assuming I need to buy a zip code database, so if anyone knows of one that will accomplish this, please let me know.

If anyone is interested in building it or has the code, let me know what you charge to do it.
Contact me at rpanek90@hotmail.com.

Friday, February 24, 2012


Best approach to creating an annotated schema?

I have painfully found out that dataset.WriteXmlSchema does NOT create an XML schema that can be used with the XMLBULKLOADER. Does anybody have any insights/code snippets/ideas on how to create an annotated schema with the 'sql:relation' and 'sql:field' annotations that are necessary for the XMLBULKLOADER?
You can check out Books Online:
http://msdn.microsoft.com/library/de...tions_0gqb.asp
Bertan ARI
This posting is provided "AS IS" with no warranties, and confers no rights.
"MSSQLServerDeveloper" <MSSQLServerDeveloper@.discussions.microsoft.com>
wrote in message news:51ABC14A-47D6-4F50-8390-F8ADF79B55D9@.microsoft.com...
> I have painfully found out that the schema created from a
> dataset.writexmlschema does NOT create an XML schema that can be used with
> the XMLBULKLOADER. Does anybody have any insights/code snippets/ideas on
how
> to create an annotated schema with the 'sql:relation' and 'sql:field'
> annotations that are necessary for the xmlbulkloader?
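
For illustration only, a minimal annotated XSD mapping schema might look like this (table and column names here are hypothetical; the sql prefix is the SQLXML mapping-schema namespace):

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
  <!-- Maps a <Customer> element to rows of a (hypothetical) Customers table -->
  <xsd:element name="Customer" sql:relation="Customers">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="Name" type="xsd:string" sql:field="CustomerName"/>
      </xsd:sequence>
      <xsd:attribute name="Id" type="xsd:int" sql:field="CustomerID"/>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>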

Best approach for creating SQL Server 2005 reports to run against SQL 2000 database engine

We have a client who is running SQL Server 2000. They have Business Objects reports. We want to create those same reports using SQL Reporting Services and get rid of Business Objects. The trick is, we'd prefer to create them in SQL Server 2005 with VS 2005, so that we have the reports created with the newest version rather than having to use the older, less functional version to create them. What are our options here? Can we create the reports in 2005 and somehow run the reports against a 2000 database without making our client have to install a SQL Server 2005 instance?
Or will we need to create the reports in SQL Server 2000? Are there other combinations that would work where we can still create them in 2005 but run against 2000?
I'm sure they will upgrade eventually, but I'd rather not rush them at this time.
Thanks much!

|||

I think I didn't get you right, so here are two explanations for your problem:

You can run reports against any supported data source; this does not have to be SQL Server. It can be Oracle, a text file, a web service, or any other source. You can use the out-of-the-box functionality and sources, or you can create your own custom provider, which leaves you with an unlimited variety of sources. The Report Server itself has to be installed on a SQL Server 2005 server, so you need a SQL Server 2005 licence; if you have one of these, you can also install a SQL Server 2005 database. If the client doesn't have a SQL Server 2005 licence, you will be stuck with SQL Server Express Advanced Edition, which you can distribute as you want but which will only have access to local SQL Server database sources.

So I hope I hit one of your problems with that answer. :-)


HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

|||

Jens,

Thanks - I think that does answer my question. My customer only has SQL Server 2000 right now, so it seems I will need to build the reports in SQL Server 2000 using Visual Studio 2003 as the IDE. In our development shop we have moved on to VS 2005, so I am assuming that for new clients we can upgrade our reports to 2005 without too much effort. If you know of any issues with upgrading reports from SQL Server 2000 Reporting Services to SQL Server 2005, please let me know.

Thanks much!

Sunday, February 12, 2012

Beginner & DB creating

Hi!

I'm trying to make a dictionary.

WordEng(#id,word);
WordIta(#id, word);

tables with words in Italian and English.
Connect1(#id,id1,id2); connects a word from Eng with its Ita translation.
Connect2(#id,id1,id2); the same for Italian.

Is this correct? Third normal form? Will this work?

|||

Is this a 1:1 relationship? I do not think so. One word can have multiple meanings, and that means your design is not right.

|||

It's a 1:N relationship.

In the third and fourth tables I store all the different translations:

1,A,B
2,A,C
3,D,B

|||

Hi,

There is a Data Model on the Database Answers web site that might give you some ideas:
http://www.databaseanswers.org/data_models/dictionaries_for_foreign_languages/index.htm

HTH

B.Dimple
Senior DBA

|||

Thank you. I will look at those DB models.

|||

Also, please don't put a hash character '#' in your column names; that's non-standard.
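
For reference, a minimal T-SQL sketch of one way this schema could look (hypothetical names; a single many-to-many mapping table can be read in both directions, which makes the second connect table unnecessary):

CREATE TABLE WordEng (id INT PRIMARY KEY, word NVARCHAR(100) NOT NULL)
CREATE TABLE WordIta (id INT PRIMARY KEY, word NVARCHAR(100) NOT NULL)

-- One junction table covers the M:N relationship in both directions:
-- a word may have several translations, and vice versa
CREATE TABLE Translation (
    eng_id INT NOT NULL REFERENCES WordEng (id),
    ita_id INT NOT NULL REFERENCES WordIta (id),
    PRIMARY KEY (eng_id, ita_id)
)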