Monday, March 19, 2012

Bug with InScope() and some custom code

Hi guys,

I was developing some custom code to do a running total in a matrix, and I have noticed some odd behaviour with the InScope() function. I am doing year-on-year reporting, so I have two row groups on my matrix: the first is on month (matrix2_Calendar_Month), the second on year (matrix2_Calendar_Year).

I needed to total the number of days covered by the months I was reporting on, so I wrote some very standard code to do this, along with an expression in that column of the matrix:

=IIf(
    InScope("matrix2_Calendar_Year"),
    CStr(Round(Sum(Fields!Sales.Value / (24 * Code.AddDays(CStr(Fields!Calendar_Month.Value), CInt(Fields!Calendar_Year.Value)))), 2)),
    Code.getBounds()
)

Code.AddDays() calculates and returns the number of days in the month of that year on that row. Code.getBounds() simply returns the lower and upper bounds of the array, plus its contents (so I can inspect them). This is what is returned in the report:

Month / Year        Sales    Capacity   % Capacity   Avg $/h
February   2006      3842        7706       49.86%      2.86
           2007         0           0        0.00%      0
March      2006      4949        8692       56.94%      3.33
           2007         0           0        0.00%      0
April      2006      5160        8154       63.28%      3.58
           2007         0           0        0.00%      0
May        2006      3309        8348       39.64%      2.22
           2007         0           0        0.00%      0
Total               17259       32900       52.46%      0-8*28,28,31,31,30,30,31,31,28

If you look at the output in the total row, you will see that Code.AddDays() has been called one extra time at the end, with Feb 2006 as its parameters, thus adding an extra 28 days to the running total. Why is Code.AddDays() called on the total row, when I should be out of the scope of both row groups? (Note: this happens for whichever row group I use in the InScope check in the expression.)

Here is the custom code used for all this:

' Array holding the number of days for each month/year combination
' processed so far while the matrix is rendered.
Dim numDays()

Public Function AddDays(ByVal month As String, ByVal year As Integer) As Integer
    Dim thisMonth As String

    ' Grow the array by one element. UBound fails on an unallocated array,
    ' so On Error Resume Next leaves upper at 0 on the first call.
    Dim upper As Integer
    upper = 0
    On Error Resume Next
    upper = UBound(numDays) + 1
    ReDim Preserve numDays(upper)

    ' Days in the given month = difference between the first of this month
    ' and the first of the next month.
    thisMonth = CStr(year) & "-" & month & "-01"
    numDays(upper) = DateDiff("d", CDate(thisMonth), DateAdd("m", 1, CDate(thisMonth)))
    AddDays = numDays(upper)
End Function

Public Function TotalDays() As Integer
    Dim lower As Integer
    Dim upper As Integer

    lower = 0
    upper = 0
    On Error Resume Next
    lower = LBound(numDays)
    upper = UBound(numDays)

    ' Sum every entry recorded so far.
    TotalDays = 0
    Dim ii As Integer
    For ii = lower To upper
        TotalDays = TotalDays + CInt(numDays(ii))
    Next
End Function

' Debug helper: returns "lower-upper*contents" so the array can be inspected in the report.
Public Function getBounds() As String
    getBounds = CStr(LBound(numDays)) & "-" & CStr(UBound(numDays)) & "*" & Join(numDays, ",")
End Function

sluggy

This has nothing to do with InScope().

The AddDays function is used within an IIf function call. IIf (like any other VB function) evaluates all of its arguments before the function itself is invoked. Hence, the AddDays function is invoked in all cases, including on the total row.
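
To illustrate the difference (condition, result, Foo and Bar are hypothetical names, not part of the report):

' With IIf, both Foo() and Bar() are evaluated before IIf returns,
' no matter which way the condition goes:
result = IIf(condition, Foo(), Bar())

' With an If/Else block, only the branch that is needed actually runs:
If condition Then
    result = Foo()
Else
    result = Bar()
End If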

Suggestion:
1. Change the expression in the matrix cell to:
=Code.MyCalculation(InScope("matrix2_Calendar_Year"))

2. Add a custom code function MyCalculation which uses If/Else blocks to call the other custom code functions. Only by using the conditional If statement (instead of the IIf function) will you achieve the desired effect (see the sketch below).
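
For illustration, here is a minimal sketch of what such a function might look like. The extra parameters are an assumption: custom code cannot reference Fields! or the Sum() aggregate directly, so the per-cell values have to be passed in from the expression, e.g.

=Code.MyCalculation(InScope("matrix2_Calendar_Year"), Sum(Fields!Sales.Value), CStr(Fields!Calendar_Month.Value), CInt(Fields!Calendar_Year.Value))

Public Function MyCalculation(ByVal inYearScope As Boolean, ByVal sales As Double, _
                              ByVal month As String, ByVal year As Integer) As String
    If inYearScope Then
        ' Detail cell: AddDays is only called (and the array only grows) here.
        MyCalculation = CStr(Math.Round(sales / (24 * AddDays(month, year)), 2))
    Else
        ' Total row: inspect the array without adding an extra entry.
        MyCalculation = getBounds()
    End If
End Function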

-- Robert

|||

Doh, thanks Robert, I should have known that. I last did VB a few years ago; I've obviously forgotten a bit :)

sluggy

|||

No problem. I'm glad I could help resolve your issue, and that there is no bug in InScope :)

-- Robert

Sunday, March 11, 2012

Bug in SSAS Rolling up data?

I'm developing an Invoice cube, and while I was debugging it I noticed some of the unit costs were too high, so I used drillthrough to get the data behind it, and the Row Count measure (the one AS can add to your cube for you) had a 2 for one of the records.

All of the other records had a row count of 1, and it's my understanding that drillthrough shows the details of the data at the grain of the fact table.

I looked at the fact table and I had 2 distinct records that do indeed add up to the values represented in the erroneous record, so... Has anyone else had this? Is there a workaround? Did I do something wrong?

Thanks in Advance,

Bryce

Drillthrough returns the most detailed records in your cube schema aggregated at the grain of each of your dimensions; that's why you see a record count of 2 for some of the rows. What you will have to do is create a degenerate dimension off of your fact table which includes your primary key along with any other columns you want to see. You can then hide this dimension but include its columns in your drillthrough to obtain the most detailed records.

Hope this helps.

Van Dieu

|||

Do you think there will be a problem with performance? To my understanding, the cube is going to hold data at a very granular level.

Wednesday, March 7, 2012

BUG in BIDS when developing SSIS packages

There seems to be a BUG in BIDS when developing SSIS packages using the Import/Export Data wizard.

If you use the wizard to import a large number of tables, select all the tables, and then choose to delete existing data in each table, the PrologueSQL file does NOT get built correctly. Instead of having a

TRUNCATE TABLE tablename
GO

for each table, it just has a bunch of "Go"s with nothing between them. In the step immediately prior, where you confirm what the wizard will do, it tells you, after each table, that it will delete any existing data...but it doesn't do this.

If, during the wizard, I select each individual table one at a time and tell it to delete existing data, then the file gets built correctly, but not if I select them all at once. Yet, if I do select the whole block, choose delete existing data, and then look at any single table, it shows that table as being set up to delete existing rows.

This is very frustrating when trying to import large numbers of tables.

Am I missing something? or is this really a bug?

Thanks, Jeff

Jeff,

We've noticed the same behavior here. If we edit the mappings for each of the tables individually, we see that 'Delete rows in destination table' is selected (after selecting that option for the group). If we click OK in the mappings dialog, the TRUNCATE option is then correctly added to the package.

I vote for bug.

Ray

|||

Post it at:

http://connect.microsoft.com/sqlserver/feedback

|||

Yes, this is a known bug.

The best workaround I can offer is to unselect the "Optimize for Many Tables" checkbox. The transfer will probably succeed if the number of tables is reasonable (say, fewer than 100) for your hardware (transfers will run in parallel). If it fails, try to go in multiple batches with a smaller number of tables.

Thanks.

Sunday, February 12, 2012

Boy, am I in big trouble!

I just got finished developing the company intranet site, and thinking that everything was working, I boasted about how good it was by getting my boss to log in and submit a new job to the db (it's a work management app) while I did the same; the plan was to hit the submit button at the same time. He would send one to be read by me and I would send one to be read by him. We both hit submit and the following happened.

The db has somehow fused the two into one. I thought maybe we were too accurate in hitting the submit button together, but even with a five second delay between us, the job is being overridden by one user or the other. In other words, we are both sharing the same jobid. I thought this could never happen with SQL Server, supposing that it would lock one request until the other was completed and vice versa. But I'm so new to this that I was naive enough to think the db would do this for you. The problem is I'm about to move on and I can't leave the app in this state. Can anyone point me to some articles or give some suggestions as to my situation? Most desperately in need!!

Thanks in advance

Oh crap, that's insane, man. What's the db architecture, and are you using procedures to do the transactions? Need more info. It shouldn't allow multiple inserts to be merged into one unless you're using a datatable to hold the data and you're doing that incorrectly.

|||

In fact, if you want, just AIM me in the morning (PST) and I'll help you out then, and post the logs here.

|||

I use stored procedures and transactions.

Where do I get the logs?

|||

Never mind. Actually... I can't see how this is happening.

In fact, it's impossible unless you're creating the primary key manually, or trying to do something uncommonly complicated and messed up along the way.

|||

This is what I'm doing, and it makes sense why it's doing this, but I'm not sure of a solution.

Let me explain.

I have two processes.

I add a job to my db; the job contains a title, date, time, etc.

When the user submits it, they are returned a job number from the db. They then need to add a note that goes along with the job. The job is stored in a job table.

A job can have many notes, but they must add at least one; that's the default behaviour. When the note has been submitted, its details are stored in a note table whose foreign key is the jobid that was returned from the first insert.

I.e. I can access all the notes for a job by its jobid.

The problem occurs when two people do this:

User A : Adds a job

User B : Adds a job

I think user A's jobid is being written over by user B's. Therefore, when user A adds a note, they are actually submitting the note against the jobid of user B.

Now, that's my hunch, and it certainly explains the behaviour. The problem is that I can't really afford to change the structure of the entire thing right now. What would really help is if I were able to prevent user A's job number from being overwritten. I'm using static vars.

Any suggestions would be most helpful!!

|||

Don't you have a username/userid column that you can use to differentiate among users?

|||

I'm going to go out on a limb and assume the JobID foreign key is an int, an Identity with an increment seed of 1, right?

|||

I found the problem and it had nothing to do with my db. I was using a property to store my jobid, which meant that the last person to add a new job would be overwriting the previous person's job id. If user A hadn't submitted his note before user B added a new job, user A's job id would be overwritten by user B's. So user A would be submitting a note with a foreign key from user B.

Now that I know what's going on, I am kinda on the road to recovery.

I just need to figure out what kind of variable I can use to persist a jobid across different users so they don't overwrite one another. I'm thinking session variables, but I'm so new at this I'm not sure if there's a better way?

|||

A property? You aren't creating a new instance of the object and sharing it between multiple calls? That's a bit unstable, don't you think?

|||

Sometimes we learn those hard lessons through trial and error.

|||

I hear ya. :) I've done the same before, actually.

This is OT, but I hope it makes you feel better... two nights ago, I made a while loop with three nested loops and two methods that work with each other to determine the end of the loop. Let's just say I forgot to set the condition to false. :) I saw my RAM go straight to 2 gigs of usage from a single process.

And yeah, I've done some pretty interesting things that, looking back now, I scoff at myself for, but you're right, it's a live-and-learn ordeal.

Good luck in finishing.
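
As a footnote on the session-variable idea discussed above, here is a minimal sketch (ASP.NET code-behind in VB; SubmitJob_Click, SubmitNote_Click, InsertJob and InsertNote are illustrative names, not from the original app) of holding the job id per user in Session instead of in a Shared/static member:

' A Shared/static field like this is a single copy for ALL users, which is
' exactly the overwriting behaviour described above:
'   Public Shared CurrentJobId As Integer

' Per-user alternative: store the id returned by the insert in Session,
' which is scoped to one user's browser session.
Protected Sub SubmitJob_Click(ByVal sender As Object, ByVal e As EventArgs)
    Dim newJobId As Integer = InsertJob()   ' hypothetical data-access call returning the new jobid
    Session("JobId") = newJobId
End Sub

Protected Sub SubmitNote_Click(ByVal sender As Object, ByVal e As EventArgs)
    Dim jobId As Integer = CInt(Session("JobId"))
    InsertNote(jobId)                       ' hypothetical call; the note keeps this user's jobid as its foreign key
End Sub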