Showing posts with label browse.

Friday, February 24, 2012

Browsing SSAS (2005) Dimensions from Excel

Hi all,

I am having difficulty understanding why my dimension names aren't showing up in a pivot table field list when I browse SSAS 2005 cubes from Excel 2003 (using the 90 PTS OLAP DLL).

Has anybody else experienced this problem where the dimension names aren't being made visible? Unfortunately all I see are my attribute names and these, by themselves, are extremely ambiguous.

Just wondering if there are any quick and easy fixes or work-arounds that anybody could recommend.

Thanks in advance,
DB
Since the hierarchies within dimensions (including the visible default attribute hierarchies) are probably what you're seeing in Excel, one work-around would be to create user-defined hierarchies with more meaningful names within the dimensions, even if they merely mimic an attribute hierarchy with an ambiguous name. The corresponding attribute hierarchies could then be hidden, to avoid confusion.

http://msdn2.microsoft.com/en-us/library/ms175631(en-US,SQL.90).aspx
>>

Configuring Multilevel Hierarchies

The Cube Wizard and the Dimension Wizard in Business Intelligence Development Studio in Microsoft SQL Server 2005 Analysis Services (SSAS) create multilevel hierarchies that are based on natural (one-to-many) relationships between columns in tables of a database. You can also create and edit hierarchies by using the Dimension Structure tab of Dimension Designer.

The Hierarchies and Levels pane of the Dimension Structure tab shows the hierarchies in the dimension. The title bar for a hierarchy shows the name of the hierarchy, which you can change by setting the Name property of the hierarchy. The levels of a hierarchy are marked by dots—the root level is marked by a single dot, and each level below the root is marked with an additional dot.

Click a hierarchy to display its properties in the Properties window. Click a level in the hierarchy to display its properties.

Creating Multilevel Hierarchies

Create multilevel hierarchies by dragging attributes from the Attributes pane of the Dimension Structure tab to a blank area on the Hierarchies and Levels pane. You can also drag columns from the Data Source View pane to the Hierarchies and Levels pane. Dragging a column creates the corresponding attribute, if the attribute does not already exist. To add an attribute or a column as a level in an existing hierarchy, drag the attribute or column to the existing hierarchy. As you drag an attribute or a column over a hierarchy, a bar indicates where the new level will be created when you drop the attribute or the column on the hierarchy.
>>

http://msdn2.microsoft.com/en-us/library/ms174939(en-US,SQL.90).aspx
>>
Showing or Hiding an Attribute Hierarchy

If you want to add an attribute as a level to multilevel hierarchies, but you do not want users to see the attribute hierarchy in client applications, set AttributeHierarchyEnabled to True and set AttributeHierarchyVisible to False. These settings prevent users from browsing an attribute hierarchy without disabling it. The AttributeHierarchyVisible setting is ignored if AttributeHierarchyEnabled is set to False.
>>

|||Your work-around makes sense. But it is a great deal of effort to do this for all of our dimension attributes in all of our dimensions.

I am still somewhat amazed that Excel 2003 isn't automatically displaying dimension names. Call me crazy but it seems to me that the name of the dimension is significant enough to display in some fashion.

Thanks,
db

Browsing Cube

Hi,

I have created, populated, and processed a cube successfully and can also browse it successfully on one server. But when I deploy this cube to another server I can create, populate, and process the cube successfully; when I go to browse the cube through SQL Server Management Studio, however, I receive the following error:

The query could not be processed: OLE DB error: OLE DB or ODBC error: You do not have permission to run 'SP_TRACE_CREATE'.; 42000.

I have checked the security and it all appears to be the same; I am using the domain administrator account on both servers.

Have you any idea what my issue might be?

Your help will be greatly appreciated.

Cheers,

Hi,

I resolved this issue myself and am posting my resolution so that it may hopefully help someone else who encounters the same problem.

Resolution:

I set the authentication of the datasources being used by the cube to SQL authentication and then used the SQL server 'sa' account as the user.

|||This would imply that you are using ROLAP or HOLAP storage. You should not need to use the "sa" account. A user with db_datareader rights should be enough. The "sa" user has too many rights and really should not be used for anything other than administrative tasks.|||

Hi Darren,

All dimensions of the cube are using plain MOLAP and the cube itself is using plain MOLAP also. If you have any suggestions on how I could browse the cube without using 'sa' as the data source user, that would be great, as I also think, as you mentioned, that 'sa' should only be used for administration purposes.

Cheers,

|||

MOLAP storage only needs a SQL connection at processing time. When you browse, it does not need a connection to the relational source. You could try using SQL Profiler to see what is going on. Do you have proactive caching or something like that configured?
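
As a minimal sketch of the db_datareader suggestion above (the login name, password, and database name below are hypothetical placeholders, not anything from this thread), the data source account could be set up roughly like this:

-- Hypothetical SQL login for the Analysis Services data source; swap in your own names.
CREATE LOGIN olap_reader WITH PASSWORD = 'UseAStrongPasswordHere1!';
GO
USE MySourceDatabase;   -- the relational database the cube's data source points at
GO
CREATE USER olap_reader FOR LOGIN olap_reader;
-- Read-only access is all that cube processing normally requires.
EXEC sp_addrolemember 'db_datareader', 'olap_reader';

The cube's data source would then connect as olap_reader instead of 'sa'.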


Browsing content of a FullText catalog

Good afternoon,
I'm looking for a way to browse or dump the content of a SQL Server
2005 FullText catalog. I saw that cidump.exe used to be the reference
in this matter but I haven't been able to find any tool which is
working with the current version of SQL Server (2005 SP2). Am I missing
any DMV or sys.fulltext_* view?
Thx for your support on that,
Regards,
Vincent.
No, cidump was yanked as it was a security risk. There is no supported way
to browse the catalog.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
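
As a small aside, SQL Server 2005 does expose full-text catalog metadata (though not the indexed content itself) through catalog views; a minimal query, assuming you only need to see which catalogs exist and where they live, would be:

-- Metadata only: lists the catalogs, not the words stored inside them.
SELECT name, path, is_default
FROM sys.fulltext_catalogs;
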
|||Ok. That's sad news! :-/
Thanks for the fast feedback anyway!
Vincent.

Browsing a cube in ssrs Matrix

When we browse the cube in Management Studio we are able to see the correct aggregates, whereas in the report from Reporting Services we are just getting sums instead of the running aggregates, especially in the intersection cells of columns and rows (wherever the aggregate function is a difference/division).

Is there a setting in Reporting Services which would make the report return the running aggregates like in Management Studio?

thanks in advance

The browser is different in Reporting Services, but if you use the MATRIX in Reporting Services, you get all the cells for columns, rows and data!

Try it... if you never did that, use the wizard!

Helped?

Regards!

|||

PedroCGD wrote:

The browser is different in Reporting Services, but if you use the MATRIX in Reporting Services, you get all the cells for columns, rows and data!

Try it... if you never did that, use the wizard!

Helped?

Regards!

Can you elaborate please?

Even when I use the matrix and the wizard, it still doesn't give me the running aggregates; it is just giving me the sum in the intersection cells.

Any thoughts?

Browser tab error/crash in BIDS when I want to browse data

I'm just a newbie to SQL Server Analysis Services 2005.

I followed the Analysis Services tutorial to create an Analysis Services project. I did everything: created the data source, the DSV, and the cube, and processed the cube. I clicked deploy project, and when I click the Browser tab it crashes and BIDS (Business Intelligence Development Studio) hangs and does not respond.

What should I do? Any suggestions? After deploying I can use the other tabs next to the Browser tab properly.

What kind of error are you getting?

See what happens if you wait a little bit. The deployment sends a command to the Analysis Server, which needs to process the cube and then execute a query to show your results in BIDS. This all might take some time. So wait till the deployment finishes and then go and browse your data.

See if installing SP2 helps you get better performance out of your cube.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

Browse using hierarchies of different dimensions together

Hi,

I have 2 hierarchies in 2 different dimensions (account and product). The dimensions are related to the same fact. The product dimension does not have any custom attribute relationships defined (because there are none; all attributes are properties of each product in the dim. I have about 60 attributes!).

I have defined hierarchies in both the dimensions. The drill down required is:

Dim_Account's Hierarchy (level1,level2,level3...) then Dim product's hierarchy (level1,level2......)

Using this gives slow performance when the user reaches the drill down to the second hierarchy.

The second dimension (hierarchy) contains about 400,000 rows !

It takes about 15-20 seconds.

I have tried in Visual studio, in Proclarity and now in OWC.

1. Will using two hierarchies together give correct results?

2. How to improve the performance?

3. Can I somehow create relationships between the two dimensions, so that I can bring the second hierarchy inside the first dimension and then make it a part of the first hierarchy?

I hope my last question is understandable :-)

Please do reply if any other info is needed.

Regards

Could you explain, with examples, what the Product drill-down hierarchy levels (level1,level2....) represent, since you say that all attributes are directly properties of Product (which is presumably the key attribute)?|||

Hi Deepak,

I will elaborate,

especially because it may seem to many like an unreal business scenario.

I have been trying to improve this for months; I hope you can help me.

Also, apologies if the post gets too long.

Here is the entire scenario:

Cube:

The cube is made of only one flattened fact table with about 100 measures, plus there are about 100 named calculations in the DSV.

We used named calculations rather than calculated members in order to improve query performance.

We have MOLAP partitions for each day (query binding). Each day consisting of about appx. half a million rows. (300000 - 500000).

We need to keep 2 months of data in a rolling window.

The data contained in the fact are stock and share trading numbers.

Dimensions:

There are 4 dimensions:

a. Product

b. account

c. currency

d. date

a. product dimension:

This contains the different products as stocks and shares, their prices, their Issue dates, ratings, description, rank, category, value..and many more which describe the product. There are about 80 attributes. All these are needed because the user might want any of the information while analyzing the information.

There is no hierarchy or levels that can be defined, because all the attributes give some detail about the product. They all describe the product in different ways.

I know that this may sound weird, but that is how the database is, and that is what the users use and want.

In fact the actual database has about 200 attributes which describe the product; we picked the most important ones that were needed.

From a point of a financial application needing real time updates, write-backs , this seems valid

and without which I believe the application would have no value.

I did create a hierarchy of about 4 attributes as per the navigation needs, but the actual data does not relate to different levels; in fact they are all on the same level.

But I did this to try and improve the query performance. - It still did not improve.

Actually I cannot measure the query performance in this dim, when using hierarchies or only attributes for browsing,

because both take more than 30-40 seconds!

Q1) How should I design this dimension? Should I break up this dimension into multiple ones? How can I improve query performance?

Q2) while defining attribute relationships, There is a property called 'Cardinality'.

I have tried to define it as follows: wherever an attribute value can have multiple related values of another attribute, I set it as 'many', otherwise as 'one'.

But I have read somewhere that this property has no effect, and is there for future use. Is this right?

if not how to define this property correctly among the attribute relationships being defined?

currently the 'product' dim contains about 450,000 dimension rows.

b. Account dimension:

This dimension contains account information of traders, stockists, companies, division info, Owner, Areas, multiple Account levels, account category, etc.

There are about 50 attributes. Among these some of them have relations. example: each account level is related to its higher account level.

I have defined as many attribute relationships as possible,

Q3) But I have not been able to set the key column with multiple items (key collection). I don't know if this is needed, or how to determine this need. (Like in the date, month, quarter, year > date dim example, where we define the key collection to contain both month and date.)

I could define about 8 Hierarchies according to user navigation needs.

But again I have a design problem here:

The lowest level, key attribute is the Account number.

The account levels, categories, trader accounts all drill down to the lowest account number.

Till here it seems alright. But there are about 5 other attributes in this dim which, for each of their values, have multiple account numbers.

For example there is an attribute called ManagementId. Each ManagementId can have multiple account numbers associated with it.

(this is what I saw from the database er diagram and the model)

So it becomes like this:

multiple account levels, each having one account number.

and multiple managementids each having one account number.

I don't know if multiple account levels can have multiple account numbers!!!! (So is it a many-to-many dimension model?)

Q4) What should I do about this? How should I model this? Should I break up this dimension into two or more?

but the performance of this dimension is fairly good,

Every drill down takes about 2 seconds to 4 seconds.

currently the 'account' dim has about 200,000 dimension rows .

3. The other two dimensions are date and currency.

date is the usual date with 2 hierarchies (calendar and fiscal) and attribute relationships defined.

The currency dimension currently only contains codes for different currency types, like Yen, Usd, Rup, Eur. ....etc.

but we just use this as a filter. It contains about 200 data items.

Q5) Hope this is correct; because there was only one attribute, I don't know if we could have

If we think logically, the account and product dimensions are related,

but I don't know how to relate them.

meaning the users want to drill down from the (account) account levels to (account) accountNumber and then to product level (product dim). (productId, product category, value....etc.)

It would mean that among the dimensions, the product dimension is the lowest granularity dimension.

Q6) How do we model this? How to achieve best performance?

I believe that I am doing something wrong somewhere or in multiple places because,

half a million rows is not a very huge data set for SSAS 2005.

I appreciate your detailed comments and help.

Of late I have not been able to implement some of the replies/help I received in this forum,

from which I get a feeling that my questions are ignored. I apologise for the confusion. My bad.

Regards

|||

You are correct that the cardinality property on AttributeRelationships currently has no impact on the server behavior.

Using two hierarchies together (in the same query) will give correct results.

You can't create relationships between dimensions in the way you suggest. However, if their contents are truly related such that product is the key to account, then you can create a single dimension. Generally, you'll want to create a single dimension for each logical business entity and include in it all the logical properties of interest which belong to that entity. Note that you needn't include all such properties in hierarchies or even have them all browsable.

The best practices document at http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/olapdbpssas2005.mspx and the performance guide at http://download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/SSAS2005PerfGuide.doc should prove helpful to you in improving your performance.

|||

Hi,

Thanks for the reply.

Firstly, yes I have gone through some portion of the performance guide, and probably assimilated some of it. I will try to work on it.

Next some things given in the performance guide or any microsoft guide take some things for granted, for example the database design. In my case the backend is very complicated.

Data comes from multiple databases to sybase (as a flattened fact).

but they are not joined in SSAS; they are joined in Universal DB2 (a federated system), also called UDB. This data is not in the form of relational tables,

but as a huge flattened fact table.

we have then created different views on top of the flattened fact. (for dimensions, for the single fact table)

The reason we could not directly use Sybase but had to put the data in UDB was that

the Sybase driver had restrictions on the length of the fields being pulled into SSAS; we had to rename the fields to short forms.

currently UDB (DB2) driver also gives a lot of problems, and the processing time has increased from 2 min to 5 min.

The reason I explained the backend was because, since the data is in a flattened table which we are splitting into views, we do not know much about the hidden relationships in the data,

due to which I always get the feeling that the data design behind the cube is itself wrong. But I have no way to determine this.

I mean should a set of attributes become one dimension, or be split up into 2 or more.

coming back to my prev questions,

some more help on them please,

1.

<......I did create a hierarchy of about 4 attributes as per the navigation needs, but the actual data does not relate to different levels; in fact they are all on the same level ...... >.

The doubt I have here is: when we create a hierarchy, the different levels of the hierarchy need not be really depicting data at different levels, do they?

I can have 3 levels in an hierarchy, but all the three levels actually are on the same level in the logical view.

for example I might be showing the following three: dim_car > car_color, car_size, car_weight.

since all the three are just properties of a car, and thus are actually at the same logical level.

but putting them in 3 levels of a hierarchy would actually define the navigation path for the user.

and also there is no relation between the size, color and weight (no attribute relationships),

thus the hierarchy does not become a natural hierarchy.

2. can I break up a dimension into multiple ones? by moving the logically related data to different dimensions.

will this improve the query performance? if so what should I consider before doing it (like key attributes, their relations with fact)?

3. <........... but I have not been able to set the key column with multiple items (key collection). I dont know if this is needed, and how to determine this need. (like in the date, month, quarter, year > date dim example. we define the key collection to contain month and date both) ...............>

any ideas on this?

4. Determining a many-to-many dimension,

This is easier if they are all different dimensions.

but in my case they might be just attributes of the same dimension,

so I might have to separate them and build a new dimension and then use the many-to-many dimension design.

so the question is how to identify this from the underlying data?

and can this impact the performance?

sorry I know some of these might be answered in the performance doc,

but the scenario becomes different to relate to sometimes.

Thanks in advance for helping

Regards

browse to reports

Hi,
I have created reports on my local machine as a test.
I can view the reports fine but others cannot browse to these reports on my
machine.
I am not using any code, just the front-end report creation of Reporting
Services in Visual Studio .NET.
How can I allow others to view these reports on my machine?
Thanks

Hi,
Surf to http://servername/reports, click on the properties-tab and add the
users you want to give access.
hth,
Pieter
"Farsh" <Farsh@.discussions.microsoft.com> wrote in message
news:1D124954-0038-4726-8BAC-CDBE70EC6F38@.microsoft.com...
> Hi,
> I have created reports on my local machine as a test.
> I can view the reports fine but others can not browse to these reports on
> my
> machine.
> I am not using any code, just the front end report creating of reporting
> services in Visual studio .net
> How can I allow other to view these reports on my machine.
|||First of all, you should tell us what error people are getting when they are
trying to access your report, so that it can be properly pointed out and
rectified. As per my understanding, I believe this would be an rsAccessDenied
error, and you have to add the users' logins with PERMIT through Report Manager; check the
security features/tab in Report Manager.
Hope that answers it.|||Thanks
"Pieter van Maasdam" wrote:
> Hi,
> Surf to http://servername/reports, click on the properties-tab and add the
> users you want to give access.
> hth,
> Pieter
> "Farsh" <Farsh@.discussions.microsoft.com> wrote in message
> news:1D124954-0038-4726-8BAC-CDBE70EC6F38@.microsoft.com...
> > Hi,
> > I have created reports on my local machine as a test.
> > I can view the reports fine but others can not browse to these reports on
> > my
> > machine.
> > I am not using any code, just the front end report creating of reporting
> > services in Visual studio .net
> >
> > How can I allow other to view these reports on my machine.
> > Thanks
>
>

Browse through a list of databases on a given server

Hi,
I would like to allow the user of my ADO.NET application to browse through a list of the available databases on a given server. INFORMATION_SCHEMA in SQL Server Books Online doesn't describe how to do this. How can I do this?
TIA,
Royce

SELECT * FROM master..sysdatabases

|||In VB6, use the traditional ADODB objects. We can get any level of schema information from SQL Server with the OpenSchema method.
I pasted VB6 sample code below; hope it is helpful to you.

example:
    ' Connect with ADO and enumerate the databases (catalogs) on the server.
    Dim cn As New ADODB.Connection
    Dim rs As ADODB.Recordset

    cn.Open "Provider=SQLOLEDB.1;Password="""";Persist Security Info=True;User ID=sa;Data Source=wmcw"

    ' adSchemaCatalogs returns one row per database on the connected server.
    Set rs = cn.OpenSchema(adSchemaCatalogs)

    Do While Not rs.EOF
        Debug.Print rs.Fields("CATALOG_NAME").Value & vbTab & rs.Fields("DESCRIPTION").Value
        rs.MoveNext
    Loop

    rs.Close
    cn.Close
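
As a side note, on SQL Server 2005 and later the catalog view sys.databases supersedes master..sysdatabases; a minimal query that can be issued from ADO.NET (or any other client) just like the one above would be:

-- One row per database on the connected instance.
SELECT name, database_id, state_desc
FROM sys.databases
ORDER BY name;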

Browse Replication commands

Hi,
We are trying to set up two-way replication from STL to China. We
know that the network is very unreliable and does not have very good bandwidth.
In order to decide if this is feasible, I need to provide information
about how much data we will be replicating every hour, and how big a
transaction can be in KB.
In transactional replication, when we issue an update or a delete
statement on a table, that affects 100 rows, when the change gets
replicated, will it issue 100 cmds on subscriber or just 1 statement,
that would affect 100 rows. I would like to know if we will be sending
100 statements over the network or just 1.
When I look at the distribution agent history, it tells me the no. of cmds
that it transferred and the rate in trans/sec, latency, etc. Is
there a way to know the size of the transactions in KB?
In order to get a good estimate on the size of the data that would be
replicated, can I rely on the size of Tlog backups generated every day?
The database that we plan to replicate is 17 GB; to do an initial
snapshot would take a while. Is there an easy way to sync up the
database in case it goes out of sync (e.g. restore the database and
start replication)?
Thanks for your help.
GG
Tlogs tend to be much larger than the bandwidth consumed by transactional
replication, however this depends on many factors, such as what sort of
indexing optimizations you are doing (which are logged).
I would look at replicating the execution of stored procedures as this will
transform the 100 singletons to a single command.
And yes 100 commands will go across the network. There is no way I know of
to get an idea of the bandwidth requirements.
The best way to deploy the snapshot is from a backup. Change all the
identity cols, constraints, triggers to Not For Replication on the
subscriber.
The best way to resync is to set your history retention to 1 week or so, and
your minimum transaction retention to 2 days. This way if your subscriber fails, you
can restore from backup (yesterday's) and all the missing transactions will
be automatically backfilled!
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
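
As a rough sketch of the "replicate the execution of stored procedures" idea mentioned above, a procedure can be published as an article of type 'proc exec'; the publication and procedure names below are made-up placeholders, not anything from this thread:

-- Publish the execution of a stored procedure rather than the row changes it makes.
EXEC sp_addarticle
    @publication   = N'MyPublication',     -- hypothetical publication name
    @article       = N'usp_UpdateOrders',  -- hypothetical procedure
    @source_owner  = N'dbo',
    @source_object = N'usp_UpdateOrders',
    @type          = N'proc exec';         -- ships the single procedure call instead of 100 singleton commands
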
"GG" <gdabbara@.gmail.com> wrote in message
news:1117649116.626833.200590@.g49g2000cwa.googlegr oups.com...
> Hi,
> We are trying to setup a two-way replication from STL to China. We
> know that the network is very unreliable, and not very good bandwidth.
> Inorder to decide if this is feasible, I need to provide information
> about how much data will we be replicating everyhour, and how big a
> transaction can be in kb.
> In transactional replication, when we issue an update or a delete
> statement on a table, that affects 100 rows, when the change gets
> replicated, will it issue 100 cmds on subscriber or just 1 statement,
> that would affect 100 rows. I would like to know if we will be sending
> 100 statements over the network or just 1.
> When I look at distribution agent history, it tells me the no. of cmds
> that it transfered and the rate of no. of trans/sec, latency etc... Is
> there a way to know the size of the transaction in KB's?
> In order to get a good estimate on the size of the data that would be
> replicated, can I rely on the size of Tlog backups generated every day?
> The database that we plan to replicate is 17GIG, to do an initial
> snapshot, it would take a while, is there an easy way to sync up the
> database, incase it goes out of sync( for eg: Restore the database, and
> start replication).
> Thanks for your help.
> GG
>
|||I am sorry! What do indexing optimizations mean? We do use indexes
extensively, as we are replicating an OLTP system. We will be
replicating the whole database, and we will set the Not For Replication
option on triggers and identity columns. As we are replicating the
entire database, do you think Tlog activity should reflect the size of
the replicated data?
Yes, we will set up to use stored procs instead of T-SQL statements. How
would they transform 100 singletons to one command? I thought they are
efficient because they are cached and the parameters are passed in as
binary values.
When you said restore from the backup if the subscriber goes out of sync,
did you mean the subscriber database backup or the publisher's? If it
is the subscriber's backup, then that would be good; that way we don't have
to mail in a copy of our backup every time something goes out of sync.
Looks like this might be better than having to compress snapshot files
and send them to the other location.
One question though: how does it know what transactions to apply to the
subscriber? My subscriber could have gone out of sync, say, at
10:00 AM, and then when I restore the subscriber from the previous
night's backup, how does it know from when to start applying the
transactions? Does it read the Tlogs to find the LSN or something?
Thanks a lot for your reply.
GG
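
For completeness, the commands that the distribution agent will deliver can be inspected directly on the Distributor with sp_browsereplcmds; a minimal sketch, assuming the default distribution database name, is below. This gives a rough feel for how many commands a given change generated, though not their size in KB.

USE distribution;
GO
-- Returns the replicated commands stored for delivery; on a busy distributor,
-- restrict the output with the optional parameters (e.g. @xact_seqno_start/@xact_seqno_end).
EXEC sp_browsereplcmds;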

Browse mssqlsystemresource.mdf (SQL Server's Resource Database)

Is there a way to have the Resource Database (mssqlsystemresource.mdf) appear
in the list of Databases in SQL Server Management Studio for easy browsing of
its contents, or is the only browse option to open the MDF file with a text
editor such as notepad?
Thanks.
Chris Jones
Application Development Consultant
Giving credit where credit is due, I found the following info I am about to
post at the following location:
http://www.aspfaq.com/sql2005/show.asp?id=28
Since the engine has hooks that hide the mssqlsystemresource database from
users, you don't have direct access to it through the GUI.
However, there is a way around this:
1.) Determine where the system database files live, and keep this path handy:
USE master
GO
SELECT REPLACE(filename, 'master.mdf', '')
FROM sys.sysfiles
WHERE fileid = 1
2.)Stop the SQL Server service;
3.)Copy the files mssqlsystemresource.*df -> resource_copy.*df.
NOTE: do *not* rename or remove the mssqlsystemresource files!
4.)Start the SQL Server service;
5.)Run the following code in a new query window:
EXEC sp_attach_db
'Resource_Copy',
'<path from above>resource_copy.mdf',
'<path from above>resource_copy.ldf'
6.)Now, the system will no longer identify this database as a "special"
database
7.)You can easily scan through the names of system objects that might
otherwise be unknown (and browse any code associated with these objects) from
a GUI (like SQL Server Management Studio).
Thanks.
Chris Jones
Application Development Consultant
"Chris" wrote:

> Is there a way to have the Resource Database (mssqlsystemresource.mdf) appear
> in the list of Databases in SQL Server Management Studio for easy browsing of
> it's contents, or is the only browse option to open the MDF file with a text
> editor such as notepad?
> --
> Thanks.
> Chris Jones
> Application Development Consultant
|||Yes, if you start your SQL Server in single-user mode.
Why do you need to see the contents? It just contains definitions of stored
procedures and views, and you can get to all those using the
OBJECT_DEFINITION function.
HTH
Kalen Delaney, SQL Server MVP
http://sqlblog.com
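
A minimal example of the OBJECT_DEFINITION approach Kalen mentions (the object name here is arbitrary; any system procedure or view works the same way):

-- Returns the T-SQL source of a system object without touching mssqlsystemresource directly.
SELECT OBJECT_DEFINITION(OBJECT_ID(N'sys.sp_columns')) AS object_source;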
"Chris" <Chris@.discussions.microsoft.com> wrote in message
news:83427BEB-8A5E-4A21-B8CC-EF65E1BEF9DD@.microsoft.com...
> Is there a way to have the Resource Database (mssqlsystemresource.mdf)
> appear
> in the list of Databases in SQL Server Management Studio for easy browsing
> of
> it's contents, or is the only browse option to open the MDF file with a
> text
> editor such as notepad?
> --
> Thanks.
> Chris Jones
> Application Development Consultant


Sunday, February 19, 2012


Browse date

After I built the cube I clicked browse date in the cube
file and received the message Undefined error. What gives?
How do I access the data? I'm new to this program and was
following the sample program that came with Analysis
Manager.
Thanks for the help
You should re-post this on the Datawarehouse newsgroup... They should be
able to help you there.
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
"Gary" <gsylvester@.npgcable.com> wrote in message
news:2b9ea01c46881$22a97fa0$a401280a@.phx.gbl...
> After I built the cube I clicked browse date in the cube
> file and received the message Undefined error. What gives?
> How do I access the data. I'm new to this program and was
> following the sample program that came with Analysis
> Manager.
> Thanks for the help

Browse Cube Causes CPU Spike & Unresponsive SSAS

Hello,

I'm interested to see if anyone else has seen the behavior I have experienced. I could have missed a KB article; however, I'm hoping that someone will help with some advice anyhow.

I have a cube (it is around 400 mb). Every morning I execute a SSIS package to process the cube (full process). This works every morning without any failures. When I came in this morning, I opened Management Studio, connected to SSAS, went to the cube, right clicked, and chose browse.

In Task Manager, the Mem Usage by msmdsrv.exe started around 411,000. Once I chose browse, the cpu jumped to 40-60 percent. It has been almost 30 minutes now, and the cpu is still in that range. The mem usage has risen to 530,000, but I expected this.

The problem is the CPU. I can't see any reason for such heavy CPU usage for 30+ minutes. If I restart the SSAS service and browse the cube again, it seems to work fine. I have seen this happen 4-5 times now.

We are using SQL Server 2005 enterprise (no SP) on Windows Server 2003 SP 1. The machine has 1.5 gig of ram.

Any suggestions or direction is greatly appreciated.

Regards,


Dan

This is strange.

It is hard to say what is going on.
Try running Profiler connected to the Analysis Server and see what it is doing during these 30 minutes. See what kind of activity is reported.

Try running the ActivityViewer sample application and see which sessions are active and what is getting executed.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

Thanks for your response Edward. I followed your suggestions, but I'm not sure it yielded any useful results. Below is what I have typed up & pasted as I was looking through things.

I’ll start from the ActivityViewer. While I got a few errors from the JIT compiler getting it to run, I managed to hit continue each time and saw some interesting information. While I’m not 100% sure how to interpret the findings, this is what I found:

Under Sessions, I found 25295. It is on the same cube/database that I tried to “Browse”. It appears that all the times show something around 12pm (although it is 8am… I’m going to ignore this for the time being) and under last command it has MDSCHEMA_CUBES.

I find this under “Transactions”:

TRANSACTION_ID: 96EC1996-B252-4F80-A0FA-330EE37476BB
TRANSACTION_SESSION_ID: 25A40F0E-A6FD-4FBA-A6FD-17DFFC0F2340
TRANSACTION_START_TIME: 7/6/2006 12:36 PM
TRANSACTION_ELAPSED_TIME_MS: 4176799
TRANSACTION_CPU_TIME_MS: 31

And this under “Locks”. The same Transaction ID above is causing the Lock.

SPID: 25295
LOCK_ID: c6eceba6-19bc-480b-96fb-989273e4eade
LOCK_TRANSACTION_ID: 96ec1996-b252-4f80-a0fa-330ee37476bb
LOCK_OBJECT_ID: <Object xmlns="urn:schemas-microsoft-com:xml-analysis:rowset"><DatabaseID>dw_prep_archive_PROD</DatabaseID></Object>
LOCK_STATUS: 1
LOCK_TYPE: 8
LOCK_CREATION_TIME: 7/6/2006 12:36 PM
LOCK_GRANT_TIME: 7/6/2006 12:36 PM

The time shown under the “lock creation time” is the time that I tried to browse the cube this morning. The profiler trace now shows 13:48, so this whole process has been occurring for over an hour (and my CPU is still above 45%).

This first set of data (below) from the profiler might show something useful, but I'm not sure what it means.

The second set below (separated by a line) shows the set of commands that loop over and over again every 2 minutes or so. I know below is incredibly messy, but it basically shows “Flight recorder begin” followed by “discover locks”, “discover transactions”, “discover locks”, and then “Flight Recorder Snapshot End”.

Although I am relatively new to SSAS, I don’t think I see anything that could be causing this. Does anyone have any insight on what is happening here?

I really don't expect anyone to try to decipher below completely, but if anyone sees any unusual commands, I would appreciate it.

The only thing I can see is that my initial connection to browse the cube is trying to do something in a transaction and is blocking everything else.

Regards,


Dan



Upon Connection

Discover Begin 26 - DISCOVER_PROPERTIES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<PropertyName>ProviderVersion</PropertyName>
</RestrictionList> 101 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:20.000 2006-07-06 12:35:20.000 3572 25267 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Timeout>0</Timeout> <SspropInitAppName>Micros
Discover End 26 - DISCOVER_PROPERTIES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<PropertyName>ProviderVersion</PropertyName>
</RestrictionList> 101 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:20.000 2006-07-06 12:35:20.000 0 0 3572 25267 0 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Timeout>0</Timeout> <SspropInitAppName>Micros
Discover Begin 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis"><ObjectExpansion>ExpandFull</ObjectExpansion></RestrictionList> 99 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:22.000 2006-07-06 12:35:22.000 25271 QG
Discover End 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis"><ObjectExpansion>ExpandFull</ObjectExpansion></RestrictionList> 99 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:22.000 2006-07-06 12:35:37.000 15107 0 25271 2625 QG
Discover Begin 26 - DISCOVER_PROPERTIES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<PropertyName>DBMSVersion</PropertyName>
</RestrictionList> 101 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:39.000 2006-07-06 12:35:39.000 3572 25267 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Timeout>0</Timeout> <SspropInitAppName>Micros
Discover End 26 - DISCOVER_PROPERTIES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<PropertyName>DBMSVersion</PropertyName>
</RestrictionList> 101 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:39.000 2006-07-06 12:35:39.000 0 0 3572 25267 0 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Timeout>0</Timeout> <SspropInitAppName>Micros
Discover Begin 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis"><ObjectExpansion>ExpandObject</ObjectExpansion></RestrictionList> 103 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:44.000 2006-07-06 12:35:44.000 25280 QG
Discover End 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis"><ObjectExpansion>ExpandObject</ObjectExpansion></RestrictionList> 103 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:44.000 2006-07-06 12:35:44.000 47 0 25280 47 QG
Discover Begin 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis"><DatabaseID>dw_prep_archive_PROD</DatabaseID><ObjectExpansion>ExpandObject</ObjectExpansion></RestrictionList> 102 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:48.000 2006-07-06 12:35:48.000 25284 QG
Discover End 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis"><DatabaseID>dw_prep_archive_PROD</DatabaseID><ObjectExpansion>ExpandObject</ObjectExpansion></RestrictionList> 102 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:48.000 2006-07-06 12:35:48.000 31 0 25284 31 QG
Discover Begin 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<ObjectExpansion>ExpandObject</ObjectExpansion>
</RestrictionList> 105 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:50.000 2006-07-06 12:35:50.000 3572 25289 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Timeout>0</Timeout> <SspropInitAppName>Micros
Discover End 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<ObjectExpansion>ExpandObject</ObjectExpansion>
</RestrictionList> 105 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:50.000 2006-07-06 12:35:50.000 47 0 3572 25289 47 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Timeout>0</Timeout> <SspropInitAppName>Micros
Discover Begin 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<DatabaseID>dw_prep_archive_PROD</DatabaseID>
<ObjectExpansion>ExpandObject</ObjectExpansion>
</RestrictionList> 105 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:52.000 2006-07-06 12:35:52.000 3572 25289 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Timeout>0</Timeout> <SspropInitAppName>Micros
Discover End 31 - DISCOVER_XML_METADATA <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<DatabaseID>dw_prep_archive_PROD</DatabaseID>
<ObjectExpansion>ExpandObject</ObjectExpansion>
</RestrictionList> 105 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:35:52.000 2006-07-06 12:35:52.000 31 0 3572 25289 31 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Timeout>0</Timeout> <SspropInitAppName>Micros
Discover Begin 26 - DISCOVER_PROPERTIES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<PropertyName>DBMSVersion</PropertyName>
</RestrictionList> 106 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:36:04.000 2006-07-06 12:36:04.000 3572 25295 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Catalog>dw_prep_archive_PROD</Catalog> <ShowH
Discover End 26 - DISCOVER_PROPERTIES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<PropertyName>DBMSVersion</PropertyName>
</RestrictionList> 106 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:36:04.000 2006-07-06 12:36:04.000 0 0 3572 25295 0 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Catalog>dw_prep_archive_PROD</Catalog> <ShowH
Discover Begin 4 - MDSCHEMA_CUBES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<CATALOG_NAME>dw_prep_archive_PROD</CATALOG_NAME>
<CUBE_NAME>DW PREP ARCHIVE</CUBE_NAME>
<CUBE_SOURCE>3</CUBE_SOURCE>
</RestrictionList> 106 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:36:04.000 2006-07-06 12:36:04.000 3572 25295 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Catalog>dw_prep_archive_PROD</Catalog> <ShowH
Discover End 4 - MDSCHEMA_CUBES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<CATALOG_NAME>dw_prep_archive_PROD</CATALOG_NAME>
<CUBE_NAME>DW PREP ARCHIVE</CUBE_NAME>
<CUBE_SOURCE>3</CUBE_SOURCE>
</RestrictionList> 106 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:36:04.000 2006-07-06 12:36:04.000 78 0 3572 25295 16 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Catalog>dw_prep_archive_PROD</Catalog> <ShowH
Discover Begin 4 - MDSCHEMA_CUBES 106 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:36:09.000 2006-07-06 12:36:09.000 3572 25295 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Catalog>dw_prep_archive_PROD</Catalog> <ShowH
Discover End 4 - MDSCHEMA_CUBES 106 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:36:09.000 2006-07-06 12:36:09.000 0 0 3572 25295 0 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Catalog>dw_prep_archive_PROD</Catalog> <ShowH
Discover Begin 4 - MDSCHEMA_CUBES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<CUBE_SOURCE>3</CUBE_SOURCE>
</RestrictionList> 106 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:36:09.000 2006-07-06 12:36:09.000 3572 25295 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Catalog>dw_prep_archive_PROD</Catalog> <ShowH
Discover End 4 - MDSCHEMA_CUBES <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<CUBE_SOURCE>3</CUBE_SOURCE>
</RestrictionList> 106 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:36:09.000 2006-07-06 12:36:09.000 16 0 3572 25295 16 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Catalog>dw_prep_archive_PROD</Catalog> <ShowH
Discover Begin 5 - MDSCHEMA_DIMENSIONS <RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">
<CUBE_NAME>DW PREP ARCHIVE</CUBE_NAME>
<CUBE_SOURCE>3</CUBE_SOURCE>
</RestrictionList> 106 DMLENZ Microsoft SQL Server Management Studio 2006-07-06 12:36:10.000 2006-07-06 12:36:10.000 3572 25295 QG <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"> <Catalog>dw_prep_archive_PROD</Catalog> <ShowH
Query Subcube 2 - Non-cache data 00000000,000000000000000000000000,0000000,00,00000000,00000000,00000000 0 2006-07-06 12:36:10.000 2006-07-06 12:36:10.000 797 dw_prep_archive_PROD 25307 531
Query Subcube 2 - Non-cache data 00000000,000000000100100000011000,0000000,00,00000000,00000000,00000000 0 2006-07-06 12:36:10.000 2006-07-06 12:36:11.000 219 dw_prep_archive_PROD 25307 1156
Query Subcube 2 - Non-cache data 00000000,000000000100100000011000,0000000,00,00000000,00000000,00000000 0 2006-07-06 12:36:19.000 2006-07-06 12:36:19.000 406 dw_prep_archive_PROD 25307 9656


Profiler that "loops"

Notification11 - Flight Recorder Snapshot Begin0 2006-07-06 12:52:29.0002006-07-06 12:52:29.000 25544

Server State Discover Begin8 - DISCOVER_LOCKS<RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">

<LOCK_MIN_TOTAL_MS>0</LOCK_MIN_TOTAL_MS>

</RestrictionList>0 2006-07-06 12:52:29.0002006-07-06 12:52:29.000 25545 <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"></PropertyList>

Server State Discover End8 - DISCOVER_LOCKS<RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">

<LOCK_MIN_TOTAL_MS>0</LOCK_MIN_TOTAL_MS>

</RestrictionList>0 2006-07-06 12:52:29.0002006-07-06 12:52:29.0000255450

Server State Discover Begin7 - DISCOVER_JOBS<RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">

<JOB_MIN_TOTAL_TIME_MS>0</JOB_MIN_TOTAL_TIME_MS>

</RestrictionList>0 2006-07-06 12:52:29.0002006-07-06 12:52:29.000 25546 <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"></PropertyList>

Server State Discover End7 - DISCOVER_JOBS<RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">

<JOB_MIN_TOTAL_TIME_MS>0</JOB_MIN_TOTAL_TIME_MS>

</RestrictionList>0 2006-07-06 12:52:29.0002006-07-06 12:52:29.0000255460

Server State Discover Begin2 - DISCOVER_SESSIONS<RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">

</RestrictionList>0 2006-07-06 12:52:29.0002006-07-06 12:52:29.000 25547 <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"></PropertyList>

Server State Discover End2 - DISCOVER_SESSIONS<RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">

</RestrictionList>0 2006-07-06 12:52:29.0002006-07-06 12:52:29.0000255470

Server State Discover Begin1 - DISCOVER_CONNECTIONS<RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">

</RestrictionList>0 2006-07-06 12:52:29.0002006-07-06 12:52:29.000 25548 <PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis"></PropertyList>

Server State Discover End1 - DISCOVER_CONNECTIONS<RestrictionList xmlns="urn:schemas-microsoft-com:xml-analysis">

</RestrictionList>0 2006-07-06 12:52:29.0002006-07-06 12:52:29.0000255480

Notification12 - Flight Recorder Snapshot End0 2006-07-06 12:52:29.0002006-07-06 12:52:29.000025544

|||

This is great investigation work you've done!

Unfortunately, I think the situation probably indicates a problem with Analysis Services.
I don't see anything you should be doing differently.

Before you go and log your case with Customer Support Services, can you please try to find a machine you can install Analysis Services SP1 on,
and see if your situation is solved in SP1.

Feel free to contact me if you get stuck with your efforts.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

Thanks for your suggestions Edward.

I'm pretty sure I found the culprit. Whether it is a bug in SSAS (or bad MDX), the person who created the cube has calculations and named sets using the calculations. I'm not entirely sure what the purpose of them was; however, when I removed the named sets, the problem was solved. The cube allows me to click "Browse" without any hangups.

I am in the process of utilizing a support contract we have to determine if this is a problem.

Thanks again.

Regards,


Dan

Browse Cube

Just a question please,

When I browse (query) the cube in Analysis Services, how can I save those queries? Can they be saved anywhere?

(By browse I mean explore and drill down into data in the Cube Browser of Analysis Services 2005.)

Help please !!!

No, you cannot save queries done through the cube browser in the BI Development Studio (BIDS).

You could use Profiler to capture the MDX queries that are sent, but you would then have to use a different tool (like SSMS or Reporting Services) to run those queries. BIDS is not really meant to be a reporting tool. You will need to look at a different tool to do this, Excel pivot tables are probably one of the cheaper, more accessible solutions.

|||

You can capture the queries using SQL Server Profiler:

http://msdn2.microsoft.com/en-us/library/ms174779.aspx

>>

SQL Server 2005 Books Online

Introduction to Monitoring Analysis Services with SQL Server Profiler

You can use SQL Server Profiler to monitor events generated by an instance of Microsoft SQL Server 2005 Analysis Services (SSAS). By using SQL Server Profiler, you can do the following:

Monitor the performance of an instance of Analysis Services.

Debug Multidimensional Expressions (MDX) statements.

Identify MDX statements that run slowly.

Test MDX statements in the development phase of a project by stepping through statements to confirm that the code works as expected.

Troubleshoot problems in Analysis Services by capturing events on a production system and replaying them on a test system. This approach is useful for testing or debugging purposes and lets users continue to use the production system without interference.

Audit and review activity that occurred on an instance of Analysis Services. A security administrator can review any one of the audited events. This includes the success or failure of a login attempt and the success or failure of permissions in accessing statements and objects.