
Sunday, March 25, 2012

Best Project Template & "The View of the event class could not be materialized"

Can someone explain this statement to me: "The View of the event class could not be materialized"? I get it when I try to do a build of my instance.

Also, yesterday I used CopySample to prepare an instance for my project. I'm not sure if this is the right way to go.

Is there a better template to use, or a process to follow, to create a complete project (including Visual Studio build scripts, etc.)?

Thanks

...Ray

Ray,
"The View of the event class could not be materialized" is a runtime error, not a build-time error. Are you sure you saw this while building your instance?

Here's what the error means:
Before SQL-NS executes the rules in your application (match rules or chronicle rules), it first builds views of the events and subscriptions tables. These views provide the input data for the rules: they contain only the events and subscriptions against which the rule should operate in any given firing. The error means that something went wrong during the process of setting up these views. Usually the rest of the error message provides more detail on the problem; if you could include the complete message, we'd be able to diagnose the problem better. The most common cause is a timeout, which can happen if the event batch is huge.

Also, please tell us what version of SQL-NS you're using - is it SQL 2000 (NS2.0) or SQL 2005? If 2005, which build are you using?

Regarding your question about the best template, you can certainly use the ones provided with the samples. However, this does involve starting from a pre-existing application and changing/removing things to make it do what you want. In my book (http://www.amazon.com/exec/obidos/tg/detail/-/0672326647/) I describe how to set up a "minimal" instance and application (with just the elements required to get it to build) and then add the code you want piece by piece. The book's CD-ROM provides all the necessary files and instructions on setting this up.

Hope this helps.
-shyam

Sunday, March 11, 2012

Best Practice for Counts/Distinct Count Measures for Filtered Data

Let's say the Count/Distinct Count measure is based on filtered data.

When the situation allows, is it better to add a CASE statement with 1s and 0s to the table in the DSV and then do a SUM on this field? Or is it better to create a new Named Query in the DSV that filters out the data through a WHERE clause on the original table, and then do a DISTINCT COUNT measure on the newly created Named Query?

If there's no clear-cut answer as to which is the "better" approach, please offer up the advantages and disadvantages of both. My priority is best performance.

An Example:

Measure: Active SKU Count
Products dimension has product_key, SKU and Status

In the DSV for the Products dimension, should I add a CASE statement:

CASE WHEN [Status] = 'Active' THEN 1 ELSE 0 END AS [Active Status]

Then create a new SUM measure based on the Active Status column.

OR

Create a new Named Query in the DSV:

SELECT * from Products WHERE Status = 'Active'

Then create a new DISTINCT COUNT measure based on the SKU column from the new Named Query?


These two alternatives do not produce the same results. If a given SKU has fact records in Jan, Feb, and Mar, the first approach using SUM would return a value of 3 for Calendar Quarter 1. The second approach, based on a distinct count measure, would return 1 (which is probably what you want).
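To make the difference concrete, here's a small T-SQL sketch; the #FactSales table and its rows are hypothetical stand-ins for the measure group's fact table:

create table #FactSales (product_key int, SKU varchar(20), Status varchar(10), FactMonth varchar(3))

-- one active SKU with fact records in Jan, Feb, and Mar
insert into #FactSales values (1, 'SKU-1', 'Active', 'Jan')
insert into #FactSales values (1, 'SKU-1', 'Active', 'Feb')
insert into #FactSales values (1, 'SKU-1', 'Active', 'Mar')

-- approach 1: SUM over the CASE flag counts once per fact row
select sum(case when Status = 'Active' then 1 else 0 end) as ActiveSum
from #FactSales -- returns 3

-- approach 2: DISTINCT COUNT counts each SKU once
select count(distinct SKU) as ActiveSkuCount
from #FactSales
where Status = 'Active' -- returns 1

drop table #FactSales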

Distinct count measures should be implemented in their own measure group anyway, so if you only ever need the active count then filtering out the non-active records is a good option.

If you need to get distinct counts of both active and inactive SKUs then you would be better off adding the Active flag as an attribute to the product dimension and implementing it as a slowly changing dimension.
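If you go the slowly changing dimension route, a minimal Type 2 sketch of the product dimension might look like this (the date columns and their names are my assumption, not from the original post):

create table DimProduct (
    product_key int identity(1,1) primary key, -- surrogate key; a new row is created each time Status changes
    SKU varchar(20) not null,                  -- business key, repeated across versions
    Status varchar(10) not null,               -- 'Active' or 'Inactive' while this version was current
    EffectiveDate datetime not null,           -- when this version became current
    ExpiryDate datetime null                   -- NULL marks the current version
)

Facts then join on product_key, so each fact row sees the status the product had when the fact was loaded.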

|||
Darren,

I had a feeling that these two alternatives would not produce the same results when I was typing up the question. Perhaps my example scenario wasn't a good one, because limited testing in my actual situation yielded the same results.

In any event, I will just go ahead and go forward with the separate measure group with the new Named Query approach.

Thanks!

Friday, February 24, 2012

Best Command to Use for getting first instance of a value

In a select statement, I want to evaluate a value, and when I find that value, I want the value for all other rows to be set to it so I end up with just one row.

For example:

Meeting# Vote
12345 Yes
12345 Maybe
12345 No
12345 See Comment
12345 Yes

Whenever I get a vote that says 'See Comment', I want the end result to look like:

Meeting# Vote
12345 See Comment

If my result does not have 'See Comment', 'Yes' will be next in line:

Meeting# Vote
12345 Yes
12345 Maybe
12345 No
12345 Yes

Whenever I get a vote that says 'Yes', I want the end result to look like:

Meeting# Vote
12345 Yes

And so on.... I hope this makes sense.

Thanks, Iris

Try:

select *
from dbo.t1 as a
where Vote = (
    select top 1 Vote
    from dbo.t1 as b
    where b.Meeting# = a.Meeting#
    order by
        case
            when Vote = 'See Comment' then 1
            when Vote = 'Yes' then 2
            when Vote = 'Maybe' then 3
            when Vote = 'No' then 4
            else 5
        end
)

-- 2005
;with cte
as
(
select
    Meeting#,
    Vote,
    row_number() over(partition by Meeting# order by
        case
            when Vote = 'See Comment' then 1
            when Vote = 'Yes' then 2
            when Vote = 'Maybe' then 3
            when Vote = 'No' then 4
            else 5
        end) as rn
from
    dbo.t1
)
select Meeting#, Vote
from cte
where rn = 1
go

AMB

|||

The first row for all of my meeting#'s was 'No'. So when select top 1 Vote is used, it got 'No' every time, and set everything to 'No'.

For the first meeting#, it looked like:

Meeting# Vote
12345 No
12345 Yes
12345 See Comment
12345 Yes

It took 'No' first, and went on to the next Meeting#. What I want it to do is be selective: if 'See Comment' exists, it takes precedence over 'Yes' and 'No'. If 'See Comment' does not exist, it will then look for 'Yes'. If neither 'Yes' nor 'See Comment' exists, it will then set the result to 'No'.

Thanks, Iris

|||

It works as expected for the sample data. If it is not what you expect, then post sample data and the expected result.

use northwind
go

create table #t (
    Meeting# int not null,
    Vote varchar(25) not null
)

insert into #t values (12345, 'No')
insert into #t values (12345, 'Yes')
insert into #t values (12345, 'See Comment')
insert into #t values (12345, 'Yes')
insert into #t values (12346, 'No')
insert into #t values (12346, 'Yes')
insert into #t values (12346, 'Yes')
insert into #t values (12347, 'No')
insert into #t values (12347, 'Maybe')
insert into #t values (12348, 'No')

select distinct *
from #t as a
where Vote = (
    select top 1 Vote
    from #t as b
    where b.Meeting# = a.Meeting#
    order by
        case
            when Vote = 'See Comment' then 1
            when Vote = 'Yes' then 2
            when Vote = 'Maybe' then 3
            when Vote = 'No' then 4
            else 5
        end
)
order by Meeting#

;with cte
as
(
select
    Meeting#,
    Vote,
    row_number() over(partition by Meeting# order by
        case
            when Vote = 'See Comment' then 1
            when Vote = 'Yes' then 2
            when Vote = 'Maybe' then 3
            when Vote = 'No' then 4
            else 5
        end) as rn
from
    #t
)
select Meeting#, Vote
from cte
where rn = 1
order by Meeting#

drop table #t
go

AMB

|||

Thanks, that worked.

Iris

Sunday, February 12, 2012

Beginner @IDENTITY question

Hello,
When I execute the following code on my localhost, the INSERT statement is successful and my ID is entered. But for some reason, when I uploaded the exact same database (MS SQL) and code to my remote host, it keeps trying to enter a NULL value into the ID column, which causes an error. I don't understand, because it is the exact same app and database, just at a remote location.

Is @@IDENTITY somehow saved in the session, and are my remote server's session settings off?

strSql = "insert into Login (UserName,Password) VALUES ('" + UserName.Text.Trim() + "','" + Password.Text.Trim() + "') select @.ID = @.@.IDENTITY";

Thanks in advance for any responses.
-Wiley

|||
Hi Wiley,

First, you should have a ; between the statements:

strSql = "insert into Login (UserName,Password) VALUES ('" + UserName.Text.Trim() + "','" + Password.Text.Trim() + "'); select @.ID = @.@.IDENTITY";

Second, you should probably be using SCOPE_IDENTITY instead of @@IDENTITY. The two are similar, but SCOPE_IDENTITY returns values inserted only within the current scope. The problem with @@IDENTITY is that you might get the ID value from another insert operation. Check out SQL Server Books Online for a more complete description of the issue.
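For example (a minimal sketch; the Login table definition here is my reconstruction from your code, not the real schema):

create table Login (ID int identity(1,1) primary key, UserName varchar(50), Password varchar(50))

insert into Login (UserName, Password) values ('wiley', 'secret')

select SCOPE_IDENTITY() as ScopeId, -- identity value generated by the insert in this scope
       @@IDENTITY as LastId         -- could instead pick up an identity inserted by a trigger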

But that doesn't explain your problem. My best guess is that the ID field isn't defined as an identity field in the database on the remote server. That's the first thing to check.
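You can check that quickly with COLUMNPROPERTY (assuming the table is dbo.Login and the column is ID):

select COLUMNPROPERTY(OBJECT_ID('dbo.Login'), 'ID', 'IsIdentity') as IsIdentity
-- 1 = identity column, 0 = not an identity column, NULL = table or column not found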

There is a slight chance that SET IDENTITY_INSERT is off in the database. I don't recall that you can set that as the normal setting, but something in the connection might be setting it. Definitely a longshot.

Third, DON'T USE DYNAMIC SQL THIS WAY! It opens you up to SQL injection attacks, particularly for fields that are obviously user input. If you don't know the issues, just ask. You are setting yourself up to have your app hacked and attacked.
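The usual fix is to pass user input as parameters rather than concatenating it into the SQL text. Here's a sketch of the same insert done with sp_executesql (table and column names taken from your post; from C#/ADO.NET you'd achieve the same thing with SqlParameter objects):

declare @sql nvarchar(300), @NewId int

set @sql = N'insert into Login (UserName, Password) values (@u, @p); select @id = SCOPE_IDENTITY();'

exec sp_executesql @sql,
    N'@u varchar(50), @p varchar(50), @id int output',
    @u = 'wiley', @p = 'secret', @id = @NewId output -- values travel as parameters, never spliced into the statement

select @NewId as NewId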

Don

|||
Hello,
Part of my problem was that I created the database locally from a .dat file in a database called Resume. Everything worked great. When I uploaded the tables to my remote assigned database, which is called "mydatabase", not Resume, I ran into permission problems because locally the owner was dbo and remotely the owner was "mydatabase". Also, some of the ID fields were not set to IDENTITY.

I basically have my application working (sort of) now, but I have to allow NULLs on every column that allows them in every table for anything to (sort of) work, which I DID NOT have to do locally, and I can't figure out why. Maybe someone can shed some light on that for me.

Also, in reference to the SQL injection attacks you mentioned above, maybe you can point me to a link on MS SQL security and explain what is wrong with using the dynamic SQL statement the way I did. I definitely don't want to start off on the wrong foot with ASP.NET, opening myself up to attacks.

thanks,
-Wiley

|||
There's a lot of info about SQL injection that you can find on the web. Some resources are:

http://www.asp.net/Forums/ShowPost.aspx?tabindex=1&PostID=533341

http://www.spidynamics.com/papers/SQLInjectionWhitePaper.pdf

http://www.nextgenss.com/papers/advanced_sql_injection.pdf

http://www.nextgenss.com/papers/more_advanced_sql_injection.pdf

Google rocks!

|||
Wrong text on the first link. My mistake.

BEGIN TRANSACTION COMMIT TRANSACTION help

I had thought that if any statement failed within a BEGIN TRANS .. COMMIT TRANS block, then all the statements would be rolled back. But I am seeing different behavior (SQL Server 2000 8.00.2039).

For instance, run these statements to set up a test:
--DROP TABLE testTable1
--DROP TABLE testTable2
CREATE TABLE testTable1 (f1 varchar(1))
CREATE TABLE testTable2 (f1 varchar(1))
CREATE UNIQUE INDEX idx_tmptmp ON testTable1 (f1)
insert into testTable1(f1) values ('a')

So table testTable1 has a unique index on it..

Now try to run these statements:

--DELETE FROM testTable2
BEGIN TRANSACTION
insert into testTable1(f1) values ('a')
insert into testTable2(f1) values ('a')
COMMIT TRANSACTION

SELECT * FROM testTable2

...the first insert fails on the unique index, but the second insert succeeds. Shouldn't the second insert roll back? How can I make the two operations atomic?

|||
In order to roll back a transaction, you have to specifically issue the ROLLBACK command, usually after checking the error status of each operation that matters.

|||
Huh. That sort of stinks. I had thought an error that occurred within an explicit transaction would automatically roll back the transaction. Guess not...

So, I have modified the test as follows and it works:

DECLARE @errorHolder int
SELECT @errorHolder = 0
--DELETE FROM testTable2
BEGIN TRANSACTION aaa
insert into testTable1(f1) values ('a')
SELECT @errorHolder = @errorHolder + @@ERROR
insert into testTable2(f1) values ('a')
SELECT @errorHolder = @errorHolder + @@ERROR
IF @errorHolder > 0
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION
SELECT * FROM testTable2

Is there an easier way?

|||
Use SET XACT_ABORT ON to automatically roll back a transaction if a run-time error is raised in the T-SQL statements.

|||
Thanks kaffenils, that's much easier.

|||
I prefer to try and catch my errors with error handling, and then I have control over my logic.

BUT DDL?

|||
"I prefer to try and catch my errors with error handling, and then I have control over my logic."

I agree that TRY...CATCH is preferable in SQL Server 2005 as it gives you more control, but craigmc is using SQL Server 2000. It is time-consuming work to use @@ERROR if the only purpose is to roll back the transaction. XACT_ABORT can provide a quick and dirty solution if handling an error only means rolling back the transaction.

|||
Time consuming? That's your justification?!
Oh dear...

|||
"Time consuming? That's your justification?!
Oh dear..."

Did you even read what I wrote? Have I said not to use TRY...CATCH or @@ERROR if you need more control over exceptions?
"Oh dear" :shocked:

All I said was that if you have multiple DML statements (and you're running on SQL Server 2000), and all you need to do if one of them fails is roll back, then it is easier to SET XACT_ABORT ON than to write multiple
IF @@ERROR <> 0 BEGIN ... END or GOTO abc
blocks. On SQL Server 2005 I would always use TRY...CATCH, except for my own simple one-time ad-hoc DMLs. There XACT_ABORT will do the job well enough.
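For reference, here are minimal sketches of both approaches against the testTable1/testTable2 setup from earlier in this thread (illustrative only):

-- SQL Server 2000: XACT_ABORT aborts the batch and rolls back on a run-time error
set xact_abort on
begin transaction
insert into testTable1(f1) values ('a') -- duplicate key error: the batch stops and the transaction rolls back
insert into testTable2(f1) values ('a') -- never runs
commit transaction

-- SQL Server 2005: TRY...CATCH gives you control over what happens on error
begin try
    begin transaction
    insert into testTable1(f1) values ('a')
    insert into testTable2(f1) values ('a')
    commit transaction
end try
begin catch
    if @@trancount > 0 rollback transaction
    -- log or re-raise here as needed
end catch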