I've got a report that can have about 50 possible parameter combinations. The
underlying query is fairly expensive, so I want to cache the dataset every
morning after I expire yesterday's cache. So I set up a data-driven
subscription to run every morning and execute all the parameter combinations
and deliver it to the Null delivery extension. The problem is that when the
scheduled time rolls around, it tries to run all 50 parameter combinations at
once. (Actually, it looks like it runs 2 or 3 combinations a second without
waiting for the first one to finish.) The data source queries run fine,
but I get a deadlock when Reporting Services tries to write the dataset to
the cache. Somewhere between 10% and 50% of the 50 fail every day. The logs
show a standard deadlock error. The error is below.
Is there any config file setting to tell RS to retry writing to the cache a
few times before giving up because of a deadlock? Is this possibly an SP2 fix?
Or is there any way to tell Reporting Services to run the 50 executions
serially instead of concurrently? Or do I need to investigate creating some
custom code to execute and cache the reports on schedule (so I can make it
execute them serially)?
Please let me know if you need more info... or if you need something from
the SQL Server logs. Thanks in advance.
--
ReportingServicesService!library!2e72c!10/28/2004-21:25:21:: e ERROR:
Throwing
Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException:
An internal error occurred on the report server. See the error log for more
details., ;
Info:
Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException:
An internal error occurred on the report server. See the error log for more
details. --> System.Data.SqlClient.SqlException: Transaction (Process ID
132) was deadlocked on lock resources with another process and has been
chosen as the deadlock victim. Rerun the transaction.
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior
cmdBehavior, RunBehavior runBehavior, Boolean returnStream)
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at
Microsoft.ReportingServices.Library.InstrumentedSqlCommand.ExecuteNonQuery()
at
Microsoft.ReportingServices.Library.DBInterface.AddReportToExecutionCache(Guid
reportId, ReportSnapshot snapshotData, DateTime executionDateTime, DateTime&
expirationDateTime)
at
Microsoft.ReportingServices.Library.RSService.RenderAsLiveOrSnapshot(CatalogItemContext
reportContext, ClientRequest session, Warning[]& warnings,
ParameterInfoCollection& effectiveParameters)
at
Microsoft.ReportingServices.Library.RSService.RenderFirst(CatalogItemContext
reportContext, ClientRequest session, Warning[]& warnings,
ParameterInfoCollection& effectiveParameters, String[]& secondaryStreamNames)
-- End of inner exception stack trace --
|||Hi,
Try setting MaxQueueThreads to 1 in the RSReportServer.config file. You may
want to escalate this to the RS PSS since it looks like a bug to me.
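For reference, a minimal sketch of what that change looks like in RSReportServer.config (the element's exact position varies between RS versions, so treat this as illustrative rather than the exact file layout):

<Configuration>
  ...
  <!-- 0 (the default) lets the report server choose how many threads
       process queued events such as data-driven subscription deliveries;
       1 forces them to run one at a time -->
  <MaxQueueThreads>1</MaxQueueThreads>
  ...
</Configuration>

The report server typically needs to be restarted (or the ReportServer Windows service recycled) before the new value is picked up.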
--
Hope this helps.
---
Teo Lachev, MVP [SQL Server], MCSD, MCT
Author: "Microsoft Reporting Services in Action"
Publisher website: http://www.manning.com/lachev
Buy it from Amazon.com: http://shrinkster.com/eq
Home page and blog: http://www.prologika.com/
---
"FurmanGG" <FurmanGG@.discussions.microsoft.com> wrote in message
news:516B456B-0E41-46AA-B775-D3BA1B13C1F0@.microsoft.com...
> I've got a report that can have about 50 possible parameter combinations.
|||That did it. Thanks SO much for your response. Now the executions of each
individual parameter combination run serially and don't cause deadlocks.
I did some research on that config file setting you suggested, Teo. Here's
what I found:
MaxQueueThreads
Specifies the maximum number of threads dedicated to polling the event table
in the report server database.
Range: 0 to max integer.
The default is 0.
So if I understand correctly, this setting will ONLY affect the number of
threads used to process events like the fulfilling of subscriptions. Is that
correct? It will not impact performance of multiple users executing and
viewing reports on demand and at the same time, right?
|||This is my understanding as well but verify it using the Performance Monitor.
--
Hope this helps.
---
Teo Lachev, MVP [SQL Server], MCSD, MCT
Author: "Microsoft Reporting Services in Action"
Publisher website: http://www.manning.com/lachev
Buy it from Amazon.com: http://shrinkster.com/eq
Home page and blog: http://www.prologika.com/
---
"FurmanGG" <FurmanGG@.discussions.microsoft.com> wrote in message
news:C9EF39EE-1638-47E9-AE5F-24896700A73D@.microsoft.com...
> That did it. Thanks SO much for your response. Now the executions of each
> individual parameter combination run serially and don't cause deadlocks.
buffer pool only has data pages?
Or does the buffer pool include procedure cache as well?
I know in SQL 2000 they have this MemToLeave concept that I have heard is
no longer referred to in 2005. So what's it called in 2005?
Thanks
|||Come on now Hassan, this is getting beyond petty. Do you really expect
everyone to spoon-feed you everything about SQL Server just because you
don't want to read a book or do a simple Google search? The answer to this
is certainly something you should already know by now but can be found in at
least 20 places with little trouble. Heck even BooksOnLine can answer that
question easily. 95% of the questions you ask here can be found in "Inside
SQL Server 2005" alone. Do everyone including yourself a favor and buy the
book and take a little time to read it. Or at least the sections that are
relevant to what you need to know at the time. You won't regret it.
--
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Hassan" <hassan@.test.com> wrote in message
news:Om2ZjtcZIHA.4696@.TK2MSFTNGP05.phx.gbl...
> Or does the buffer pool include procedure cache as well ?
> I know in sql 2000, they have this mem to leave concept that I have heard
> is no longer referred to in 2005. So whats it called in 2005?
> Thanks
|||Andrew J. Kelly wrote:
> Come on now Hassan this is getting beyond petty. Do you really expect
> everyone to spoon feed you everything about SQL Server just because
> you don't want to read a book or do a simple Google search. The answer
> to this is certainly something you should already know by now but can
> be found in at least 20 places with little trouble. Heck even
> BooksOnLine can answer that question easily. 95% of the questions you
> ask here can be found in "Inside SQL Server 2005" alone. Do everyone
> including yourself a favor and buy the book and take a little time to
> read it. Or at least the sections that are relevant to what you need
> to know at the time. You won't regret it.
>
And neither will we :)
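For what it's worth, one way to see how SQL Server 2005 carves up memory between the data cache and the other caches is to query the memory-clerk DMV; this is just a sketch, and the exact clerk names can differ between builds:

-- Per-clerk memory in SQL Server 2005/2008. single_pages_kb is taken
-- from the buffer pool; multi_pages_kb is allocated outside it, in the
-- region that SQL 2000 called MemToLeave.
SELECT type,
       SUM(single_pages_kb) AS single_page_kb,
       SUM(multi_pages_kb)  AS multi_page_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY single_page_kb DESC;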
Buffer Manager on SQL Server
I am monitoring my SQL Server. If the server has 2 GB of
physical RAM and SQL Server memory is 1.6 GB and the SQL
Server Buffer Manager cache hit ratio is 99, is that
good? If not, what configuration can be done? The server
is very slow at the moment. Thank you.
Jasmine
>--Original Message--
>I am monitoring my SQL Server. If the server has 2gig of
>physical ram and SQL server memory is 1.6G and the SQL
>Servver Buffer Manager cache hit ratio is 99, is that
>good? If not, what configuration can be done? The
server
>is very slow at the moment. Thank you.
>Jasmine
Friends,
I just thought I should post this from sp_configure so you
can see how the memory is configured for sql server. It
is set to dynamically configure sql server memory - but
when I run performance monitoring it shows that sql server
consumes too much memory. Please take a look and let me
know if I need to reduce the amount of sql memory or some
key buffer and sort buffer configurations? Thank you very
much.
name                        minimum  maximum     configure_value  run_value
max degree of parallelism   0        32          0                0
max server memory (MB)      4        2147483647  2147483647       2147483647
max text repl size (B)      0        2147483647  65536            65536
max worker threads          32       32767       255              255
media retention             0        365         0                0
min memory per query (KB)   512      2147483647  1024             1024
min server memory (MB)      0        2147483647  0                0
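For reference, values like these are changed with sp_configure; a hedged example of capping max server memory (the 1600 MB figure is purely illustrative for a 2 GB box) would look like this:

-- 'max server memory (MB)' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool so the OS keeps some headroom (value is illustrative)
EXEC sp_configure 'max server memory (MB)', 1600;
RECONFIGURE;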
|||Those numbers are pretty good. SQL is using as much memory as is available
(2GB minus some for OS, overhead, etc. 1.6 is about right). 99% Buffer
cache hit ratio is also good. Until that gets under 95% or so, adding
memory likely won't give a noticeable performance boost.
It looks like memory isn't your performance problem. Here is a good
starting point for finding out why your server isn't responding as well as
you need it to.
HOW TO: Troubleshoot Application Performance Issues
http://support.microsoft.com/default.aspx?scid=kb;en-us;298475&Product=sql2k
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
"Jasmine Quinlan" <anonymous@.discussions.microsoft.com> wrote in message
news:f64701c43dd2$ff0f0f20$a001280a@.phx.gbl...
> I am monitoring my SQL Server. If the server has 2gig of
> physical ram and SQL server memory is 1.6G and the SQL
> Servver Buffer Manager cache hit ratio is 99, is that
> good? If not, what configuration can be done? The server
> is very slow at themoment. Thank you.
> Jasmine
|||Geoff,
Thank you so much for your help. I appreciate that.
Jasmine
Buffer Cache Hit Ratio on a Reporting DB
Hi all.
First off, I'm a sys admin, not a DBA.
I'm trying to improve buffer cache hit ratio on a reporting database.
Database is approximately 130GB. Current server is a Proliant ML530 G2,
2-way 3.6Ghz, 8GB RAM. Multiple datafiles spread out across 4 separate
raid 5 arrays. Disk queuing is pretty good. Sometimes we hit a rough
day.
I've read over and over how BCHR needs to be above 90%, optimally
around 98-99%. On the live OLTP database, BCHR is fine with 8GB in the
system. On the reporting DB, it often drops to 80% or below.
My question is, should a reporting database have this large of a
difference in the hit ratio?
I'm working with our developer & DBA to improve the reports, but I also
need to make sure that the hardware is performing as well as it can.
I've got a DL585, 4-core Opteron in the wings with 16GB running Win2K3
x64. We're waiting to make sure SP4 is completely kosher with our
application before upgrading.
Thoughts, ideas? Much appreciated!
-AJ
|||<ajohnson@.echecktrac.com> wrote in message
news:1144442893.581038.299300@.e56g2000cwe.googlegroups.com...
> Hi all.
> First off, I'm a sys admin, not a DBA.
> I'm trying to improve buffer cache hit ratio on a reporting database.
> Database is approximately 130GB. Current server is a Proliant ML530 G2,
> 2-way 3.6Ghz, 8GB RAM. Multiple datafiles spread out across 4 separate
> raid 5 arrays. Disk queuing is pretty good. Sometimes we hit a rough
> day.
> I've read over and over how BCHR needs to be above 90%, optimally
> around 98-99%. On the live OLTP database, BCHR is fine with 8GB in the
> system.
Unfortunately, BCHR on OLTP databases is an irrelevant metric. A high BCHR
often results from inefficient queries reading lots and lots of pages in the
cache. And the memory size/database size ratio of an OLTP system often
doesn't even permit a low cache hit ratio.
>On the reporting DB, it often drops to 80% or below.
> My question is, should a reporting database have this large of a
> difference in the hit ratio?
Yes, it will quite likely be lower. It's a matter of the ratio between the
cache (<8GB) and the data (130GB), and the predictability of the queries
and access paths. Better indexing can help improve performance, although
since indexing reduces total IO, not just physical IO, you might not see a
better BCHR.
> I'm working with our developer & DBA to improve the reports, but I also
> need to make sure that the hardware is performing as good as it can.
> I've got a DL585, 4-core Opteron in the wings with 16GB running Win2K3
> x64. We're waiting to make sure SP4 is completely kosher with our
> application before upgrading.
>
Going all the way to SQL 2005 64-bit will help a lot more. You can use all
that memory much more efficiently, plus it's easier to monitor the
performance and find the expensive queries.
David
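On the point about finding the expensive queries: once on SQL Server 2005, the query-stats DMV makes that fairly easy. A rough sketch, ordered by physical reads since that is what hurts a low buffer cache hit ratio:

-- Top statements by physical reads since their plans entered the cache
SELECT TOP 10
       qs.total_physical_reads,
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset END
                  - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_physical_reads DESC;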
buffer cache hit ratio...how to measure?
Does anyone know how to measure the buffer cache hit ratio? I have been reading a lot about it but can't find this measurement in Performance Monitor.
Thanks
|||Nevermind, I found the counter. For anyone else who may be interested, it is in the performance monitor under SQL Server: Buffer Manager.
Thanks
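For completeness, the same counter can also be read with a query; on SQL Server 2005 and later it lives in sys.dm_os_performance_counters (on SQL 2000 the equivalent view is master.dbo.sysperfinfo), and the ratio has to be computed from its base counter:

-- Buffer cache hit ratio without opening Performance Monitor
SELECT 100.0 * a.cntr_value / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.object_name = b.object_name
WHERE a.object_name LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';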
Buffer Cache Hit Ratio vs Memory Manager Cache Hit ratio
We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
trying to ascertain whether or not upgrading to an OS that will see the full
monty will help or redeploying the assets would be a better option.
The Buffer Cache Hit Ratio - 99%
The Memory Manager Cache Hit Ratio = 77%
The Target and actual values for Total Memory and Buffer Pages are the same
The memory pages/sec swings wildly; it is generally below 1 but goes up into the
thousands.
What is the difference between the two Cache Hit Ratio objects?
Note: Vendor supplied DB is horrendous in design.
|||Hi
You would need Enterprise Edition of Windows and SQL Server if you want to
use all 8GB.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Jeffrey K. Ericson" wrote:
> We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
> trying ascertain whether or not upgrading to a OS that will see the full
> monty will help or redploying the assets would be a better option
> The Buffer Cache Hit Ratio - 99%
> The Memory Manager Cache Hit Ratio = 77%
> The Target and actual values for Total Memory and Buffer Pages are the same
> The memory pages/sec swings wildly, generally below 1 it goes up into the
> thousands.
> What is the difference between the two Cache Hit Ratio objects
> Note: Vendor supplied DB is horrendous in design.
|||Yes, we have EE of SQL running on 2k standard. We are contemplating going to
a 2003 version that can support the 8GB.
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> You would need Enterprise Edition of Windows and SQL Server if you want to
> use all 8GB.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
>
> "Jeffrey K. Ericson" wrote:
> > We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
> > trying ascertain whether or not upgrading to a OS that will see the full
> > monty will help or redploying the assets would be a better option
> >
> > The Buffer Cache Hit Ratio - 99%
> > The Memory Manager Cache Hit Ratio = 77%
> > The Target and actual values for Total Memory and Buffer Pages are the same
> > The memory pages/sec swings wildly, generally below 1 it goes up into the
> > thousands.
> >
> > What is the difference between the two Cache Hit Ratio objects
> > Note: Vendor supplied DB is horrendous in design.
|||Those are good questions; however, you will find that using Win2K AS or
Win2K3 EE is not much different; although AWE will momentarily ease memory
pressures, the added headaches it introduces will make the upgrades
short-lived. By going that route, you could extend the life of your system
for a year or two at best, but then you will be quickly running into the
limitations of the 32-bit architecture.
Since the OS and SQL Server licenses can be redeployed, and upgraded to
their 64-bit counterparts without charge, you could go ahead and upgrade
immediately but consider 64-bit solutions long term, IA64, not that silly
wanna-be, x64, EMT64 garbage.
It kind of depends on the usage. I would look at the SQL Server:Databases,
Transactions per second for each database, or the _Total instance, and the
SQL Server:General Statistics, Batch Requests per second to get an idea of
how busy your box is.
As for the two Cache Hit Ratios: the Buffer Manager actually has five (5)
separate memory managers associated with it...that is, the Buffer Pool. The
Data Buffer, which is just one of the memory managers, stores and manages
the data page cache and is what this counter is referring to. Whenever data
is requested to satisfy a query, if the data pages supporting that request
is currently resident in memory, you have a Data Buffer Cache Hit, otherwise
it is a miss and Disk I/O must be performed to load the requested pages.
You want a high cache hit, anything below 90% and you would be pretty
sluggish. Even at 98%, you are 10 times as inefficient as at 99%.
Now, the other cached items are the procedure plans in the PROCCACHE. The
Cache Manager, Cache Hit Ratio measures this and, actually, has multiple
instances, the _Total and one for each type of execution object: Procedures,
Ad-Hoc, Execution Contexts, Triggers, Cursors, Misc. Normalized Trees, etc.,
etc., etc. You should look at all of the instances as well as the _Total,
which you are quoting a figure of 77%. The instances can clue you in on
whether you have a memory limitation issue or just bad coding.
Now, let this be known, if you go the AWE route, it will be mostly for
additional Data Page buffering, not the Proc Cache. Welcome to the
limitations of the 32-bit architecture. No matter what you do, you are
limited to the 2 GB/3 GB lower memory region for Proc Cache storage. Now,
it is possible to offload some of the Data Buffer to the AWE space thereby
leaving more lower memory region for a larger than currently allocated Proc
Cache, but once you go AWE, SQL Server loses some of its flexibility and
dynamicity. Chances are your Proc Cache will not be reallocated much. In
fact, whenever there are memory pressures, the proc cache is usually the
first thing that is flushed or paged out to the swap file.
So, long term, you have two choices: go multi-instanced, either on the same
server, or by installing multiple servers, or take a real good look at IA64,
64-bit platforms. Dell has two low-entry versions currently, their 3250 and
7250 series.
Spend some time with the DBCC MEMORYSTATUS and DBCC PROCCACHE. These will
give you a lot of insight into how your installation is partitioning its
various memory allocations.
INF: Using DBCC MEMORYSTATUS to Monitor SQL Server Memory Usage
http://support.microsoft.com/default.aspx?scid=kb;en-us;271624
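For anyone who has not used them, both commands take no arguments and are simply run from a query window:

-- Dumps every memory manager/clerk and how much it has allocated
DBCC MEMORYSTATUS;
-- Summarizes procedure cache size and usage
DBCC PROCCACHE;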
Hope this helps.
Sincerely,
Anthony Thomas
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:CAB56205-326F-41A1-8E42-20985CBE4CFF@.microsoft.com...
We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
trying ascertain whether or not upgrading to a OS that will see the full
monty will help or redploying the assets would be a better option
The Buffer Cache Hit Ratio - 99%
The Memory Manager Cache Hit Ratio = 77%
The Target and actual values for Total Memory and Buffer Pages are the same
The memory pages/sec swings wildly, generally below 1 it goes up into the
thousands.
What is the difference between the two Cache Hit Ratio objects
Note: Vendor supplied DB is horrendous in design.
|||Do you have the PAE switch and AWE switch enabled?
If you have done the required configuration, I don't see how the OS is not
able to see beyond 3GB.
Thanks,
Gopi
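For reference, the checks Gopi mentions come down to a boot.ini switch plus a server option; a hedged sketch for 32-bit SQL Server 2000/2005 Enterprise (the memory figure is illustrative for an 8 GB box):

-- The OS must be booted with /PAE in boot.ini (and be an edition that can
-- address more than 4 GB), and the SQL Server service account needs the
-- "Lock Pages in Memory" privilege before AWE memory can be used.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
-- With AWE on, SQL 2000 stops managing memory dynamically, so cap it
-- explicitly (illustrative value):
EXEC sp_configure 'max server memory (MB)', 7168;
RECONFIGURE;
-- 'awe enabled' only takes effect after the SQL Server service is restarted.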
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:8FCCB4AB-E082-40AE-B508-EECCB6603033@.microsoft.com...
> Yes, we have EE of SQL running on 2k standard. We are contemplaing going
> to
> a 2003 version that can support the 8GB.
> "Mike Epprecht (SQL MVP)" wrote:
>> Hi
>> You would need Enterprise Edition of Windows and SQL Server if you want
>> to
>> use all 8GB.
>> Regards
>> --
>> Mike Epprecht, Microsoft SQL Server MVP
>> Zurich, Switzerland
>> MVP Program: http://www.microsoft.com/mvp
>> Blog: http://www.msmvps.com/epprecht/
>>
>> "Jeffrey K. Ericson" wrote:
>> > We have a box that has 8GB of RAM but the OS only lets us see 3GB. We
>> > are
>> > trying ascertain whether or not upgrading to a OS that will see the
>> > full
>> > monty will help or redploying the assets would be a better option
>> >
>> > The Buffer Cache Hit Ratio - 99%
>> > The Memory Manager Cache Hit Ratio = 77%
>> > The Target and actual values for Total Memory and Buffer Pages are the
>> > same
>> > The memory pages/sec swings wildly, generally below 1 it goes up into
>> > the
>> > thousands.
>> >
>> > What is the difference between the two Cache Hit Ratio objects
|||Found this in another post. Have you applied SP4?
See the link below for the KB article on this issue. Microsoft are still
working on the problem.
--
Nick Colebourn (MCDBA)
DBA
United Coop Ltd
"Dimitar Dimitrov" wrote:
> The configuration is:
> OS: Windows 2003 Enterprise Edition
> RAM: 6GB
> boot.ini switches : /fastdetect /PAE /3GB /USERVA=3030
> SQL 2000 Enterprise Edition SP3a (awe enabled; min server memory (MB) = max server memory (MB) = 5120)
> Before installing SP4 SQL server uses configured amount of memory
> (5120MB).
> After installing SP4 it doesn't utilize more than 3GB of memory; the SQL
> Server Logs reports "Warning: unable to allocate 'min server memory' of
> 5120MB."
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:8FCCB4AB-E082-40AE-B508-EECCB6603033@.microsoft.com...
> Yes, we have EE of SQL running on 2k standard. We are contemplaing going
> to
> a 2003 version that can support the 8GB.
> "Mike Epprecht (SQL MVP)" wrote:
>> Hi
>> You would need Enterprise Edition of Windows and SQL Server if you want
>> to
>> use all 8GB.
>> Regards
>> --
>> Mike Epprecht, Microsoft SQL Server MVP
>> Zurich, Switzerland
>> MVP Program: http://www.microsoft.com/mvp
>> Blog: http://www.msmvps.com/epprecht/
>>
>> "Jeffrey K. Ericson" wrote:
>> > We have a box that has 8GB of RAM but the OS only lets us see 3GB. We
>> > are
>> > trying ascertain whether or not upgrading to a OS that will see the
>> > full
>> > monty will help or redploying the assets would be a better option
>> >
>> > The Buffer Cache Hit Ratio - 99%
>> > The Memory Manager Cache Hit Ratio = 77%
>> > The Target and actual values for Total Memory and Buffer Pages are the
>> > same
>> > The memory pages/sec swings wildly, generally below 1 it goes up into
>> > the
>> > thousands.
>> >
>> > What is the difference between the two Cache Hit Ratio objects
|||"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:OzSLCxEaFHA.1152@.tk2msftngp13.phx.gbl...
> Those are good questions; however, you will find that using Win2K AS or
> Win2K3 EE not much different, but although AWE will momentarily ease
> memory
> pressures, the added headaches it introduces will make the upgrades
> short-lived.
What headaches do you mean? We've been using AWE for a few years, and I have not
had any headaches because of that :) The only headache could be hardware
and/or drivers not compatible with AWE.
Best regards
Wojtek
|||How many transactions per second and/or batch requests per second does your
installation produce?
Ours pushes nearly 1,500 of both/either quite often. This, coupled
with extended proc, FOR XML, and LOB usage, constrains the 32-bit memory
space quite heavily.
We've been piloting a 64-bit system for a while now and are getting ready to
make the conversion. Dropping AWE altogether and going back to Dynamic
Memory management for all of the memory managers including the PROC CACHE
was probably the most compelling reason for the push to convert.
Glad to hear that you haven't begun to hit those limits yet and that it is
still working out for your installation(s).
Sincerely,
Anthony Thomas
"Wojtek Garwol" <garwol@.usunto.poczta.fm> wrote in message
news:d7qh4d$7v9$1@.nemesis.news.tpi.pl...
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:OzSLCxEaFHA.1152@.tk2msftngp13.phx.gbl...
> Those are good questions; however, you will find that using Win2K AS or
> Win2K3 EE not much different, but although AWE will momentarily ease
> memory
> pressures, the added headaches it introduces will make the upgrades
> short-lived.
What headaches do you mean? We're using AWE for a few years, and I have not
had any headaches because of that :) The only headache could be hardware
and/or drivers not compatible with AWE.
Best regards
Wojtek
|||The profile of our system is very different indeed. We have quite a lot of
data, 200+GB, the biggest table has about 80 million records, and it is mostly
static. In this scenario, AWE, /pae and /3gb seem to work just fine. But now
I do see what you mean. AFAIK, AWE memory can be used only by data cache, so
it's pretty much limited indeed. Thanks for the info!
Regards
Wojtek
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:%23Wl4GcIaFHA.2076@.TK2MSFTNGP15.phx.gbl...
> How many transactions per second and/or batch requests per second does
> your
> installation produce?
> Ours pushes nearly 1500 hundred of both/either quite often. This coupled
> with extended proc, FOR XML, and LOB usage, constrains the 32-bit memory
> space quite heavily.
> We've been piloting a 64-bit system for a while now and are getting ready
> to
> make the conversion. Dropping AWE altogether and going back to Dynamic
> Memory management for all of the memory managers including the PROC CACHE
> was probably the most compelling reason for the push to convert.
> Glad to hear that you haven't begun to hit those limits yet and that it is
> still working out for your installation(s).
> Sincerely,
>
> Anthony Thomas
>
> --
> "Wojtek Garwol" <garwol@.usunto.poczta.fm> wrote in message
> news:d7qh4d$7v9$1@.nemesis.news.tpi.pl...
> "Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
> news:OzSLCxEaFHA.1152@.tk2msftngp13.phx.gbl...
>> Those are good questions; however, you will find that using Win2K AS or
>> Win2K3 EE not much different, but although AWE will momentarily ease
>> memory
>> pressures, the added headaches it introduces will make the upgrades
>> short-lived.
> What headaches do you mean? We're using AWE for a few years, and I have
> not
> had any headaches because of that :) The only headache could be hardware
> and/or drivers not compatible with AWE.
> Best regards
> Wojtek
>
trying ascertain whether or not upgrading to a OS that will see the full
monty will help or redploying the assets would be a better option
The Buffer Cache Hit Ratio - 99%
The Memory Manager Cache Hit Ratio = 77%
The Target and actual values for Total Memory and Buffer Pages are the same
The memory pages/sec swings wildly, generally below 1 it goes up into the
thousands.
What is the difference between the two Cache Hit Ratio objects
Note: Vendor supplied DB is horrendous in design.Hi
You would need Enterprise Edition of Windows and SQL Server if you want to
use all 8GB.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Jeffrey K. Ericson" wrote:
> We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
> trying ascertain whether or not upgrading to a OS that will see the full
> monty will help or redploying the assets would be a better option
> The Buffer Cache Hit Ratio - 99%
> The Memory Manager Cache Hit Ratio = 77%
> The Target and actual values for Total Memory and Buffer Pages are the same
> The memory pages/sec swings wildly, generally below 1 it goes up into the
> thousands.
> What is the difference between the two Cache Hit Ratio objects
> Note: Vendor supplied DB is horrendous in design.|||Yes, we have EE of SQL running on 2k standard. We are contemplaing going to
a 2003 version that can support the 8GB.
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> You would need Enterprise Edition of Windows and SQL Server if you want to
> use all 8GB.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
>
> "Jeffrey K. Ericson" wrote:
> > We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
> > trying ascertain whether or not upgrading to a OS that will see the full
> > monty will help or redploying the assets would be a better option
> >
> > The Buffer Cache Hit Ratio - 99%
> > The Memory Manager Cache Hit Ratio = 77%
> > The Target and actual values for Total Memory and Buffer Pages are the same
> > The memory pages/sec swings wildly, generally below 1 it goes up into the
> > thousands.
> >
> > What is the difference between the two Cache Hit Ratio objects
> > Note: Vendor supplied DB is horrendous in design.|||Those are good questions; however, you will find that moving to Win2K AS
or Win2K3 EE makes little difference in the long run: although AWE will
momentarily ease memory pressure, the added headaches it introduces will
make the upgrade short-lived. By going that route you could extend the life
of your system for a year or two at best, but then you will quickly run into
the limitations of the 32-bit architecture.
Since the OS and SQL Server licenses can be redeployed, and upgraded to
their 64-bit counterparts without charge, you could go ahead and upgrade
immediately but consider 64-bit solutions long term, IA64, not that silly
wanna-be, x64, EMT64 garbage.
It kind of depends on the usage. I would look at the SQL Server:Databases,
Transactions per second for each database, or the _Total instance, and the
SQL Server:General Statistics, Batch Requests per second to get an idea of
how busy your box is.
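If Perfmon is not handy, those two counters can also be sampled from T-SQL.
A rough sketch against master.dbo.sysperfinfo (the values there are
cumulative, so take two readings and divide by the interval; object and
counter names are as they appear on a default SQL Server 2000 instance):
-- First reading of the cumulative counters
DECLARE @batches1 int, @batches2 int, @trans1 int, @trans2 int
SELECT @batches1 = cntr_value FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%General Statistics%' AND counter_name = 'Batch Requests/sec'
SELECT @trans1 = cntr_value FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Databases%' AND counter_name = 'Transactions/sec'
  AND instance_name = '_Total'
-- Wait ten seconds, then read the same counters again
WAITFOR DELAY '00:00:10'
SELECT @batches2 = cntr_value FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%General Statistics%' AND counter_name = 'Batch Requests/sec'
SELECT @trans2 = cntr_value FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Databases%' AND counter_name = 'Transactions/sec'
  AND instance_name = '_Total'
-- Per-second rates over the ten-second window
SELECT (@batches2 - @batches1) / 10 AS [Batch Requests/sec],
       (@trans2 - @trans1) / 10 AS [Transactions/sec]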
As for the two Cache Hit Ratios: the Buffer Manager actually has five (5)
separate memory managers associated with it...that is, the Buffer Pool. The
Data Buffer, which is just one of those memory managers, stores and manages
the data page cache and is what this counter refers to. Whenever data is
requested to satisfy a query, if the data pages supporting that request are
currently resident in memory, you have a Data Buffer Cache Hit; otherwise it
is a miss and disk I/O must be performed to load the requested pages.
You want a high cache hit ratio; anything below 90% and the system will be
pretty sluggish. Even at 98% you are doing twice as much physical I/O as at
99%.
Now, the other cached items are the procedure plans in the PROCCACHE. The
Cache Manager: Cache Hit Ratio counter measures this and actually has
multiple instances: _Total and one for each type of execution object
(Procedures, Ad-Hoc, Execution Contexts, Triggers, Cursors, Misc. Normalized
Trees, and so on). You should look at all of the instances as well as the
_Total, which is where the 77% figure you quote comes from. The instances
can clue you in on whether you have a memory limitation issue or just bad
coding.
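Both ratios can also be pulled from T-SQL instead of Perfmon. A sketch,
assuming the counter names as they appear in SQL Server 2000's
master.dbo.sysperfinfo (each ratio counter is paired with a "base" counter,
and Perfmon simply divides the two):
-- Derive the hit ratios by dividing each ratio counter by its base counter
SELECT v.object_name,
       v.instance_name,
       CASE WHEN b.cntr_value = 0 THEN 0
            ELSE 100.0 * v.cntr_value / b.cntr_value
       END AS hit_ratio_pct
FROM master.dbo.sysperfinfo AS v
JOIN master.dbo.sysperfinfo AS b
  ON  b.object_name = v.object_name
  AND b.instance_name = v.instance_name
  AND RTRIM(b.counter_name) = RTRIM(v.counter_name) + ' base'
WHERE v.counter_name IN ('Buffer cache hit ratio', 'Cache Hit Ratio')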
Now, let this be known: if you go the AWE route, it will be mostly for
additional data page buffering, not the Proc Cache. Welcome to the
limitations of the 32-bit architecture. No matter what you do, you are
limited to the 2 GB/3 GB lower memory region for Proc Cache storage. It is
possible to offload some of the Data Buffer to the AWE space, thereby
leaving more of the lower memory region for a larger Proc Cache than is
currently allocated, but once you go AWE, SQL Server loses some of its
flexibility and dynamic memory management. Chances are your Proc Cache will
not be resized much. In fact, whenever there is memory pressure, the proc
cache is usually the first thing that is flushed or paged out to the swap
file.
So, long term, you have two choices: go multi-instanced (either on the same
server or by installing multiple servers), or take a real good look at IA64
64-bit platforms. Dell currently has two entry-level models, their 3250 and
7250 series.
Spend some time with DBCC MEMORYSTATUS and DBCC PROCCACHE. These will give
you a lot of insight into how your installation is partitioning its various
memory allocations.
INF: Using DBCC MEMORYSTATUS to Monitor SQL Server Memory Usage
http://support.microsoft.com/default.aspx?scid=kb;en-us;271624
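Both commands can be run as-is from Query Analyzer; a minimal example:
-- Dump the server's internal memory usage by allocation area
DBCC MEMORYSTATUS
-- Summarize the procedure cache: plan count, pages used, pages in active use
DBCC PROCCACHE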
Hope this helps.
Sincerely,
Anthony Thomas
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:CAB56205-326F-41A1-8E42-20985CBE4CFF@.microsoft.com...
We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
trying ascertain whether or not upgrading to a OS that will see the full
monty will help or redploying the assets would be a better option
The Buffer Cache Hit Ratio - 99%
The Memory Manager Cache Hit Ratio = 77%
The Target and actual values for Total Memory and Buffer Pages are the same
The memory pages/sec swings wildly, generally below 1 it goes up into the
thousands.
What is the difference between the two Cache Hit Ratio objects
Note: Vendor supplied DB is horrendous in design.|||Do you have the /PAE boot switch and the 'awe enabled' option turned on?
If you have done the required configuration, I don't see how the OS is not
able to see beyond 3GB.
Thanks,
Gopi
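For anyone following along, the SQL Server side of that configuration is
just a couple of sp_configure calls (the /PAE switch itself goes in
boot.ini, the service account needs the "Lock Pages in Memory" right, and
the 7168 MB cap below is only an illustrative value leaving roughly 1 GB
for the OS):
-- Expose the advanced options so 'awe enabled' can be set
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Enable AWE (takes effect after a service restart) and cap SQL Server's memory
EXEC sp_configure 'awe enabled', 1
EXEC sp_configure 'max server memory (MB)', 7168
RECONFIGURE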
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:8FCCB4AB-E082-40AE-B508-EECCB6603033@.microsoft.com...
> Yes, we have EE of SQL running on 2k standard. We are contemplaing going
> to
> a 2003 version that can support the 8GB.
> "Mike Epprecht (SQL MVP)" wrote:
>> Hi
>> You would need Enterprise Edition of Windows and SQL Server if you want
>> to
>> use all 8GB.
>> Regards
>> --
>> Mike Epprecht, Microsoft SQL Server MVP
>> Zurich, Switzerland
>> MVP Program: http://www.microsoft.com/mvp
>> Blog: http://www.msmvps.com/epprecht/
>>
>> "Jeffrey K. Ericson" wrote:
>> > We have a box that has 8GB of RAM but the OS only lets us see 3GB. We
>> > are
>> > trying ascertain whether or not upgrading to a OS that will see the
>> > full
>> > monty will help or redploying the assets would be a better option
>> >
>> > The Buffer Cache Hit Ratio - 99%
>> > The Memory Manager Cache Hit Ratio = 77%
>> > The Target and actual values for Total Memory and Buffer Pages are the
>> > same
>> > The memory pages/sec swings wildly, generally below 1 it goes up into
>> > the
>> > thousands.
>> >
>> > What is the difference between the two Cache Hit Ratio objects
>> > Note: Vendor supplied DB is horrendous in design.|||Found this in another post. Have you applied SP4?
See the link below for the KB article on this issue. Microsoft are still
working on the problem.
--
Nick Colebourn (MCDBA)
DBA
United Coop Ltd
"Dimitar Dimitrov" wrote:
> The configuration is:
> OS: Windows 2003 Enterprise Edition
> RAM: 6GB
> boot.ini switches : /fastdetect /PAE /3GB /USERVA=3030
> SQL 2000 Enterprise Edition SP3a (awe enabled; min server memory (MB) = max server memory (MB) = 5120)
> Before installing SP4 SQL server uses configured amount of memory
> (5120MB).
> After installing SP4 it doesn't utilize more than 3GB of memory the SQL
> Server Logs reports "Warning: unable to allocate 'min server memory' of
> 5120MB."
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:8FCCB4AB-E082-40AE-B508-EECCB6603033@.microsoft.com...
> Yes, we have EE of SQL running on 2k standard. We are contemplaing going
> to
> a 2003 version that can support the 8GB.
> "Mike Epprecht (SQL MVP)" wrote:
>> Hi
>> You would need Enterprise Edition of Windows and SQL Server if you want
>> to
>> use all 8GB.
>> Regards
>> --
>> Mike Epprecht, Microsoft SQL Server MVP
>> Zurich, Switzerland
>> MVP Program: http://www.microsoft.com/mvp
>> Blog: http://www.msmvps.com/epprecht/
>>
>> "Jeffrey K. Ericson" wrote:
>> > We have a box that has 8GB of RAM but the OS only lets us see 3GB. We
>> > are
>> > trying ascertain whether or not upgrading to a OS that will see the
>> > full
>> > monty will help or redploying the assets would be a better option
>> >
>> > The Buffer Cache Hit Ratio - 99%
>> > The Memory Manager Cache Hit Ratio = 77%
>> > The Target and actual values for Total Memory and Buffer Pages are the
>> > same
>> > The memory pages/sec swings wildly, generally below 1 it goes up into
>> > the
>> > thousands.
>> >
>> > What is the difference between the two Cache Hit Ratio objects
>> > Note: Vendor supplied DB is horrendous in design.|||"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:OzSLCxEaFHA.1152@.tk2msftngp13.phx.gbl...
> Those are good questions; however, you will find that using Win2K AS or
> Win2K3 EE not much different, but although AWE will momentarily ease
> memory
> pressures, the added headaches it introduces will make the upgrades
> short-lived.
What headaches do you mean? We've been using AWE for a few years, and I have
not had any headaches because of it :) The only headache could be hardware
and/or drivers not compatible with AWE.
Best regards
Wojtek|||How many transactions per second and/or batch requests per second does your
installation produce?
Ours pushes nearly 1,500 of either quite often. That, coupled with extended
proc, FOR XML, and LOB usage, constrains the 32-bit memory space quite
heavily.
We've been piloting a 64-bit system for a while now and are getting ready to
make the conversion. Dropping AWE altogether and going back to Dynamic
Memory management for all of the memory managers including the PROC CACHE
was probably the most compelling reason for the push to convert.
Glad to hear that you haven't begun to hit those limits yet and that it is
still working out for your installation(s).
Sincerely,
Anthony Thomas
"Wojtek Garwol" <garwol@.usunto.poczta.fm> wrote in message
news:d7qh4d$7v9$1@.nemesis.news.tpi.pl...
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:OzSLCxEaFHA.1152@.tk2msftngp13.phx.gbl...
> Those are good questions; however, you will find that using Win2K AS or
> Win2K3 EE not much different, but although AWE will momentarily ease
> memory
> pressures, the added headaches it introduces will make the upgrades
> short-lived.
What headaches do you mean? We're using AWE for a few years, and I have not
had any headaches because of that :) The only headache could be hardware
and/or drivers not compatible with AWE.
Best regards
Wojtek|||The profile of our system is very different indeed. We have quite a lot of
data (200+ GB; the biggest table has about 80 million rows), and it is mostly
static. In this scenario AWE, /PAE and /3GB seem to work just fine. But now
I do see what you mean. AFAIK, AWE memory can be used only by the data cache,
so it's pretty much limited indeed. Thanks for the info!
Regards
Wojtek
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:%23Wl4GcIaFHA.2076@.TK2MSFTNGP15.phx.gbl...
> How many transactions per second and/or batch requests per second does
> your
> installation produce?
> Ours pushes nearly 1500 hundred of both/either quite often. This coupled
> with extended proc, FOR XML, and LOB usage, constrains the 32-bit memory
> space quite heavily.
> We've been piloting a 64-bit system for a while now and are getting ready
> to
> make the conversion. Dropping AWE altogether and going back to Dynamic
> Memory management for all of the memory managers including the PROC CACHE
> was probably the most compelling reason for the push to convert.
> Glad to hear that you haven't begun to hit those limits yet and that it is
> still working out for your installation(s).
> Sincerely,
>
> Anthony Thomas
>
> --
> "Wojtek Garwol" <garwol@.usunto.poczta.fm> wrote in message
> news:d7qh4d$7v9$1@.nemesis.news.tpi.pl...
> "Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
> news:OzSLCxEaFHA.1152@.tk2msftngp13.phx.gbl...
>> Those are good questions; however, you will find that using Win2K AS or
>> Win2K3 EE not much different, but although AWE will momentarily ease
>> memory
>> pressures, the added headaches it introduces will make the upgrades
>> short-lived.
> What headaches do you mean? We're using AWE for a few years, and I have
> not
> had any headaches because of that :) The only headache could be hardware
> and/or drivers not compatible with AWE.
> Best regards
> Wojtek
>
Buffer Cache Hit Ratio vs Memory Manager Cache Hit ratio
We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
trying ascertain whether or not upgrading to a OS that will see the full
monty will help or redploying the assets would be a better option
The Buffer Cache Hit Ratio - 99%
The Memory Manager Cache Hit Ratio = 77%
The Target and actual values for Total Memory and Buffer Pages are the same
The memory pages/sec swings wildly, generally below 1 it goes up into the
thousands.
What is the difference between the two Cache Hit Ratio objects
Note: Vendor supplied DB is horrendous in design.Hi
You would need Enterprise Edition of Windows and SQL Server if you want to
use all 8GB.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Jeffrey K. Ericson" wrote:
> We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
> trying ascertain whether or not upgrading to a OS that will see the full
> monty will help or redploying the assets would be a better option
> The Buffer Cache Hit Ratio - 99%
> The Memory Manager Cache Hit Ratio = 77%
> The Target and actual values for Total Memory and Buffer Pages are the sam
e
> The memory pages/sec swings wildly, generally below 1 it goes up into the
> thousands.
> What is the difference between the two Cache Hit Ratio objects
> Note: Vendor supplied DB is horrendous in design.|||Yes, we have EE of SQL running on 2k standard. We are contemplaing going to
a 2003 version that can support the 8GB.
"Mike Epprecht (SQL MVP)" wrote:
[vbcol=seagreen]
> Hi
> You would need Enterprise Edition of Windows and SQL Server if you want to
> use all 8GB.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
>
> "Jeffrey K. Ericson" wrote:
>|||Those are good questions; however, you will find that using Win2K AS or
Win2K3 EE not much different, but although AWE will momentarily ease memory
pressures, the added headaches it introduces will make the upgrades
short-lived. By going that route, you could extend the life of your system
for a year or two at best, but then you will be quickly running into the
limitations of the 32-bit architecture.
Since the OS and SQL Server licenses can be redeployed, and upgraded to
their 64-bit counterparts without charge, you could go ahead and upgrade
immediately but consider 64-bit solutions long term, IA64, not that silly
wanna-be, x64, EMT64 garbage.
It kind of depends on the usage. I would look at the SQL Server:Databases,
Transactions per second for each database, or the _Total instance, and the
SQL Server:General Statistics, Batch Requests per second to get an idea of
how busy your box is.
As for the two Cach hit Ratios: the Buffer Manager actually has five (5)
seperate memory managers associated with it...that is, the Buffer Pool. The
Data Buffer, which is just one of the memory managers, stores and manages
the data page cache and is what this counter is referring to. Whenever data
is requested to satisfy a query, if the data pages supporting that request
is currently resident in memory, you have a Data Buffer Cache Hit, otherwise
it is a miss and Disk I/O must be performed to load the requested pages.
You want a high cache hit, anything below 90% and you would be pretty
sluggish. Even at 98%, you are 10 times as inefficient than at 99%.
Now, the other cached items are the procedure plans in the PROCCACHE. The
Cache Manager, Cache Hit Ratio measures this and, actually, has multiple
instances, the _Total and one for each type of execution object: Procedures,
Ad-Hoc, Execution Contexts, Triggers, Cursors, Misc. Normalized Trees, etc.,
etc., etc. You should look at all of the instances as well as the _Total,
which you are quoting a figure of 77%. The instances can clue you in on
whether you have a memory limitation issue or just bad coding.
Now, let this be known, if you go the AWE route, it will be mostly for
additional Data Page buffering, not the Proc Cache. Welcome to the
limitations of the 32-bit arcitecture. No matter what you do, you are
limited to the 2 GB/3 GB lower memory region for Proc Cache storage. Now,
it is possible to offload some of the Data Buffer to the AWE space thereby
leaving more lower memory region for a larger than currently allocated Proc
Cache, but once you go AWE, SQL Server looses some of its flexibility and
dynamicity. Chances are your Proc Cache will not be reallocated much. In
fact, whenever there are memory pressures, the proc cache is usually the
first thing that is flushed or paged out to the swap file.
So, long term, you have two choices: go multi-instanced, either on the same
server, or by installing multiple servers, or took a real good look at IA64,
64-bit platforms. Dell has two low-entry versions currently, their 3250 and
7250 series.
Spend some time with the DBCC MEMORYSTATUS and DBCC PROCCACHE. These will
give you a lot of insight into how your installations is partitioning it
various memory allocations.
INF: Using DBCC MEMORYSTATUS to Monitor SQL Server Memory Usage
http://support.microsoft.com/defaul...kb;en-us;271624
Hope this helps.
Sincerely,
Anthony Thomas
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:CAB56205-326F-41A1-8E42-20985CBE4CFF@.microsoft.com...
We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
trying ascertain whether or not upgrading to a OS that will see the full
monty will help or redploying the assets would be a better option
The Buffer Cache Hit Ratio - 99%
The Memory Manager Cache Hit Ratio = 77%
The Target and actual values for Total Memory and Buffer Pages are the same
The memory pages/sec swings wildly, generally below 1 it goes up into the
thousands.
What is the difference between the two Cache Hit Ratio objects
Note: Vendor supplied DB is horrendous in design.|||Do u have the PAE switch and AWE switch enabled ?
If you have done the required configuration, I dont see how the OS is not
able to see
beyond 3GB.
Thanks,
Gopi
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:8FCCB4AB-E082-40AE-B508-EECCB6603033@.microsoft.com...[vbcol=seagreen]
> Yes, we have EE of SQL running on 2k standard. We are contemplaing going
> to
> a 2003 version that can support the 8GB.
> "Mike Epprecht (SQL MVP)" wrote:
>|||Found this in another Post. Have you applied SP4 ?
See the link below for the KB article on this issue. Microsoft are still
working on the problem.
--
Nick Colebourn (MCDBA)
DBA
United Coop Ltd
"Dimitar Dimitrov" wrote:
> The configuration is:
> OS: Windows 2003 Enterprise Edition
> RAM: 6GB
> boot.ini switches : /fastdetect /PAE /3GB /USERVA=3030
> SQL 2000 Enterprise Edition SP3a (awe enabled; min server memory (MB) =
> max server memory (MB) = 5120)
> Before installing SP4 SQL server uses configured amount of memory
> (5120MB).
> After installing SP4 it doesn't utilize more than 3GB of memory the SQL
> Server Logs reports "Warning: unable to allocate 'min server memory' of
> 5120MB."
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:8FCCB4AB-E082-40AE-B508-EECCB6603033@.microsoft.com...[vbcol=seagreen]
> Yes, we have EE of SQL running on 2k standard. We are contemplaing going
> to
> a 2003 version that can support the 8GB.
> "Mike Epprecht (SQL MVP)" wrote:
>|||Those are good questions; however, you will find that using Win2K AS or
Win2K3 EE not much different, but although AWE will momentarily ease memory
pressures, the added headaches it introduces will make the upgrades
short-lived. By going that route, you could extend the life of your system
for a year or two at best, but then you will be quickly running into the
limitations of the 32-bit architecture.
Since the OS and SQL Server licenses can be redeployed, and upgraded to
their 64-bit counterparts without charge, you could go ahead and upgrade
immediately but consider 64-bit solutions long term, IA64, not that silly
wanna-be, x64, EMT64 garbage.
It kind of depends on the usage. I would look at the SQL Server:Databases,
Transactions per second for each database, or the _Total instance, and the
SQL Server:General Statistics, Batch Requests per second to get an idea of
how busy your box is.
As for the two Cach hit Ratios: the Buffer Manager actually has five (5)
seperate memory managers associated with it...that is, the Buffer Pool. The
Data Buffer, which is just one of the memory managers, stores and manages
the data page cache and is what this counter is referring to. Whenever data
is requested to satisfy a query, if the data pages supporting that request
is currently resident in memory, you have a Data Buffer Cache Hit, otherwise
it is a miss and Disk I/O must be performed to load the requested pages.
You want a high cache hit, anything below 90% and you would be pretty
sluggish. Even at 98%, you are 10 times as inefficient than at 99%.
Now, the other cached items are the procedure plans in the PROCCACHE. The
Cache Manager, Cache Hit Ratio measures this and, actually, has multiple
instances, the _Total and one for each type of execution object: Procedures,
Ad-Hoc, Execution Contexts, Triggers, Cursors, Misc. Normalized Trees, etc.,
etc., etc. You should look at all of the instances as well as the _Total,
which you are quoting a figure of 77%. The instances can clue you in on
whether you have a memory limitation issue or just bad coding.
Now, let this be known, if you go the AWE route, it will be mostly for
additional Data Page buffering, not the Proc Cache. Welcome to the
limitations of the 32-bit arcitecture. No matter what you do, you are
limited to the 2 GB/3 GB lower memory region for Proc Cache storage. Now,
it is possible to offload some of the Data Buffer to the AWE space thereby
leaving more lower memory region for a larger than currently allocated Proc
Cache, but once you go AWE, SQL Server looses some of its flexibility and
dynamicity. Chances are your Proc Cache will not be reallocated much. In
fact, whenever there are memory pressures, the proc cache is usually the
first thing that is flushed or paged out to the swap file.
So, long term, you have two choices: go multi-instanced, either on the same
server, or by installing multiple servers, or took a real good look at IA64,
64-bit platforms. Dell has two low-entry versions currently, their 3250 and
7250 series.
Spend some time with the DBCC MEMORYSTATUS and DBCC PROCCACHE. These will
give you a lot of insight into how your installations is partitioning it
various memory allocations.
INF: Using DBCC MEMORYSTATUS to Monitor SQL Server Memory Usage
http://support.microsoft.com/defaul...kb;en-us;271624
Hope this helps.
Sincerely,
Anthony Thomas
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:CAB56205-326F-41A1-8E42-20985CBE4CFF@.microsoft.com...
We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
trying ascertain whether or not upgrading to a OS that will see the full
monty will help or redploying the assets would be a better option
The Buffer Cache Hit Ratio - 99%
The Memory Manager Cache Hit Ratio = 77%
The Target and actual values for Total Memory and Buffer Pages are the same
The memory pages/sec swings wildly, generally below 1 it goes up into the
thousands.
What is the difference between the two Cache Hit Ratio objects
Note: Vendor supplied DB is horrendous in design.|||"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:OzSLCxEaFHA.1152@.tk2msftngp13.phx.gbl...
> Those are good questions; however, you will find that using Win2K AS or
> Win2K3 EE not much different, but although AWE will momentarily ease
> memory
> pressures, the added headaches it introduces will make the upgrades
> short-lived.
What headaches do you mean? We're using AWE for a few years, and I have not
had any headaches because of that
The only headache could be hardware
and/or drivers not compatible with AWE.
Best regards
Wojtek|||How many transactions per second and/or batch requests per second does your
installation produce?
Ours pushes nearly 1500 hundred of both/either quite often. This coupled
with extended proc, FOR XML, and LOB usage, constrains the 32-bit memory
space quite heavily.
We've been piloting a 64-bit system for a while now and are getting ready to
make the conversion. Dropping AWE altogether and going back to Dynamic
Memory management for all of the memory managers including the PROC CACHE
was probably the most compelling reason for the push to convert.
Glad to hear that you haven't begun to hit those limits yet and that it is
still working out for your installation(s).
Sincerely,
Anthony Thomas
"Wojtek Garwol" <garwol@.usunto.poczta.fm> wrote in message
news:d7qh4d$7v9$1@.nemesis.news.tpi.pl...
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:OzSLCxEaFHA.1152@.tk2msftngp13.phx.gbl...
> Those are good questions; however, you will find that using Win2K AS or
> Win2K3 EE not much different, but although AWE will momentarily ease
> memory
> pressures, the added headaches it introduces will make the upgrades
> short-lived.
What headaches do you mean? We're using AWE for a few years, and I have not
had any headaches because of that
The only headache could be hardware
and/or drivers not compatible with AWE.
Best regards
Wojtek|||The profile of our system is very different indeed. We have quite a lot of
data, 200+GB, biggest table has about 80mln records, and it is mostly
static. In this scenario, AWE, /pae and /3gb seem to work just fine. But now
I do see what you mean. AFAIK, AWE memory can be used only by data cache, so
it's pretty much limited indeed. Thanks for the info!
Regards
Wojtek
"Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
news:%23Wl4GcIaFHA.2076@.TK2MSFTNGP15.phx.gbl...
> How many transactions per second and/or batch requests per second does
> your
> installation produce?
> Ours pushes nearly 1500 hundred of both/either quite often. This coupled
> with extended proc, FOR XML, and LOB usage, constrains the 32-bit memory
> space quite heavily.
> We've been piloting a 64-bit system for a while now and are getting ready
> to
> make the conversion. Dropping AWE altogether and going back to Dynamic
> Memory management for all of the memory managers including the PROC CACHE
> was probably the most compelling reason for the push to convert.
> Glad to hear that you haven't begun to hit those limits yet and that it is
> still working out for your installation(s).
> Sincerely,
>
> Anthony Thomas
>
> --
> "Wojtek Garwol" <garwol@.usunto.poczta.fm> wrote in message
> news:d7qh4d$7v9$1@.nemesis.news.tpi.pl...
> "Anthony Thomas" <ALThomas@.kc.rr.com> wrote in message
> news:OzSLCxEaFHA.1152@.tk2msftngp13.phx.gbl...
> What headaches do you mean? We're using AWE for a few years, and I have
> not
> had any headaches because of that
The only headache could be hardware
> and/or drivers not compatible with AWE.
> Best regards
> Wojtek
>
trying ascertain whether or not upgrading to a OS that will see the full
monty will help or redploying the assets would be a better option
The Buffer Cache Hit Ratio - 99%
The Memory Manager Cache Hit Ratio = 77%
The Target and actual values for Total Memory and Buffer Pages are the same
The memory pages/sec swings wildly, generally below 1 it goes up into the
thousands.
What is the difference between the two Cache Hit Ratio objects
Note: Vendor supplied DB is horrendous in design.Hi
You would need Enterprise Edition of Windows and SQL Server if you want to
use all 8GB.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Jeffrey K. Ericson" wrote:
> We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
> trying ascertain whether or not upgrading to a OS that will see the full
> monty will help or redploying the assets would be a better option
> The Buffer Cache Hit Ratio - 99%
> The Memory Manager Cache Hit Ratio = 77%
> The Target and actual values for Total Memory and Buffer Pages are the same
> The memory pages/sec swings wildly, generally below 1 it goes up into the
> thousands.
> What is the difference between the two Cache Hit Ratio objects
> Note: Vendor supplied DB is horrendous in design.|||Yes, we have Enterprise Edition of SQL Server running on Windows 2000 Standard.
We are contemplating moving to a Windows 2003 edition that can support the 8GB.
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> You would need Enterprise Edition of Windows and SQL Server if you want to
> use all 8GB.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
>
> "Jeffrey K. Ericson" wrote:
>|||Those are good questions; however, you will find that Win2K AS and
Win2K3 EE are not much different: although AWE will momentarily ease memory
pressures, the added headaches it introduces will make the upgrade
short-lived. By going that route, you could extend the life of your system
for a year or two at best, but then you will quickly run into the
limitations of the 32-bit architecture.
Since the OS and SQL Server licenses can be redeployed and upgraded to
their 64-bit counterparts without charge, you could go ahead and upgrade
immediately, but consider 64-bit solutions long term: IA64, not that silly
wanna-be x64/EM64T garbage.
It kind of depends on the usage. I would look at SQL Server:Databases -
Transactions/sec for each database (or the _Total instance), and
SQL Server:SQL Statistics - Batch Requests/sec, to get an idea of
how busy your box is.
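If you cannot get at Performance Monitor directly, the same counters are exposed through T-SQL in SQL 2000's master.dbo.sysperfinfo; a rough sketch (the 'SQLServer:' prefix assumes a default instance; named instances use 'MSSQL$InstanceName:'):
-- Both counters are cumulative totals: sample twice and divide the difference
-- by the elapsed seconds to get a per-second rate.
SELECT object_name, counter_name, instance_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE (object_name = 'SQLServer:Databases' AND counter_name = 'Transactions/sec')
   OR (object_name = 'SQLServer:SQL Statistics' AND counter_name = 'Batch Requests/sec')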
As for the two Cache Hit Ratios: the Buffer Manager actually has five (5)
separate memory managers associated with it...that is, the Buffer Pool. The
Data Buffer, which is just one of the memory managers, stores and manages
the data page cache and is what this counter is referring to. Whenever data
is requested to satisfy a query, if the data pages supporting that request
are currently resident in memory, you have a Data Buffer cache hit; otherwise
it is a miss and disk I/O must be performed to load the requested pages.
You want a high cache hit ratio; anything below 90% and you will be pretty
sluggish. Even at 98%, you are doing roughly twice the physical I/O you would at 99%.
Now, the other cached items are the procedure plans in the PROCCACHE. The
Cache Manager's Cache Hit Ratio counter measures this and actually has multiple
instances: the _Total and one for each type of execution object (Procedures,
Ad-Hoc, Execution Contexts, Triggers, Cursors, Misc. Normalized Trees, etc.).
You should look at all of the instances as well as the _Total, for
which you are quoting a figure of 77%. The instances can clue you in on
whether you have a memory limitation issue or just bad coding.
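A sketch of how to see those per-instance figures without touching Perfmon, again via master.dbo.sysperfinfo (each ratio counter has a matching 'base' counter it must be divided by; the object name below assumes a default instance):
-- Cache Manager hit ratio per execution-object type (Procedure Plans, Adhoc Sql Plans, etc.)
SELECT r.instance_name,
       CAST(100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS decimal(5, 2)) AS cache_hit_ratio_pct
FROM master.dbo.sysperfinfo r
JOIN master.dbo.sysperfinfo b
  ON b.object_name = r.object_name
 AND b.instance_name = r.instance_name
WHERE r.object_name = 'SQLServer:Cache Manager'
  AND r.counter_name = 'Cache Hit Ratio'
  AND b.counter_name = 'Cache Hit Ratio Base'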
Now, let this be known: if you go the AWE route, it will be mostly for
additional data page buffering, not the Proc Cache. Welcome to the
limitations of the 32-bit architecture. No matter what you do, you are
limited to the 2 GB/3 GB lower memory region for Proc Cache storage. Now,
it is possible to offload some of the Data Buffer to the AWE space, thereby
leaving more of the lower memory region for a larger Proc Cache than is
currently allocated, but once you go AWE, SQL Server loses some of its
flexibility and dynamism. Chances are your Proc Cache will not be reallocated much. In
fact, whenever there are memory pressures, the proc cache is usually the
first thing that is flushed or paged out to the swap file.
So, long term, you have two choices: go multi-instance, either on the same
server or by installing multiple servers, or take a real good look at IA64
64-bit platforms. Dell has two entry-level versions currently, their 3250 and
7250 series.
Spend some time with DBCC MEMORYSTATUS and DBCC PROCCACHE. These will
give you a lot of insight into how your installation is partitioning its
various memory allocations.
INF: Using DBCC MEMORYSTATUS to Monitor SQL Server Memory Usage
http://support.microsoft.com/defaul...kb;en-us;271624
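A minimal sketch of that, run from Query Analyzer (both commands are available in SQL 2000; MEMORYSTATUS's output is described in the KB article above and its layout varies by service pack):
-- Breakdown of the buffer pool and the other memory managers
DBCC MEMORYSTATUS
-- Summary of the procedure cache: pages used/allocated, object counts, hit ratio
DBCC PROCCACHE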
Hope this helps.
Sincerely,
Anthony Thomas
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:CAB56205-326F-41A1-8E42-20985CBE4CFF@.microsoft.com...
We have a box that has 8GB of RAM but the OS only lets us see 3GB. We are
trying to ascertain whether upgrading to an OS that will see the full
monty will help, or whether redeploying the assets would be a better option.
The Buffer Cache Hit Ratio - 99%
The Memory Manager Cache Hit Ratio = 77%
The Target and actual values for Total Memory and Buffer Pages are the same
The memory pages/sec swings wildly, generally below 1 it goes up into the
thousands.
What is the difference between the two Cache Hit Ratio objects
Note: Vendor supplied DB is horrendous in design.|||Do you have the /PAE switch and AWE enabled?
If you have done the required configuration, I don't see how the OS is not
able to see beyond 3GB.
Thanks,
Gopi
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:8FCCB4AB-E082-40AE-B508-EECCB6603033@.microsoft.com...
> Yes, we have EE of SQL running on 2k standard. We are contemplaing going
> to
> a 2003 version that can support the 8GB.
> "Mike Epprecht (SQL MVP)" wrote:
>|||Found this in another post. Have you applied SP4?
See the link below for the KB article on this issue. Microsoft are still
working on the problem.
--
Nick Colebourn (MCDBA)
DBA
United Coop Ltd
"Dimitar Dimitrov" wrote:
> The configuration is:
> OS: Windows 2003 Enterprise Edition
> RAM: 6GB
> boot.ini switches : /fastdetect /PAE /3GB /USERVA=3030
> SQL 2000 Enterprise Edition SP3a (awe enabled; min server memory (MB) =
> max server memory (MB) = 5120)
> Before installing SP4, SQL Server used the configured amount of memory
> (5120MB).
> After installing SP4 it doesn't utilize more than 3GB of memory, and the SQL
> Server log reports "Warning: unable to allocate 'min server memory' of
> 5120MB."
"Jeffrey K. Ericson" <JeffreyKEricson@.discussions.microsoft.com> wrote in
message news:8FCCB4AB-E082-40AE-B508-EECCB6603033@.microsoft.com...
> Yes, we have EE of SQL running on 2k standard. We are contemplaing going
> to
> a 2003 version that can support the 8GB.
> "Mike Epprecht (SQL MVP)" wrote:
buffer cache hit ratio estimation
Hi
I have trouble with MSSQL 2000 SP4 (without any hotfixes). During the last two
weeks it started working abnormally. After the last optimization (a few months
ago) it worked well (fast, without blocks). Its buffer cache hit ratio was
about 99.7-99.8. Since yesterday it has been slow, with many blocks and
deadlocks. No new queries, jobs or applications were added. Now the
buffer cache hit ratio oscillates around 95-98. I tried updating statistics and
reindexing some heavily used tables, but there is no effect, or the effect is very
short (after a few hours the problem returns).
Maybe someone knows what it could be?
Is it possible to estimate (using DBCC SHOW_STATISTICS, DBCC SHOWCONTIG, or
other commands) how each table affects the total buffer cache hit
ratio?
Marek
--
www.programowanieobiektowe.pl|||Chances are you are now scanning a table(s) whereas before you were doing
seeks. You need to profile the queries to see which are the poorly
performing ones (ones with high duration and reads) and address why.
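If a Profiler trace has been saved to a file, one sketch of pulling the heavy hitters out server-side (::fn_trace_gettable ships with SQL 2000; the file path below is only a placeholder):
-- Top statements by logical reads from a saved trace file
SELECT TOP 20 TextData, Duration, Reads, Writes, CPU
FROM ::fn_trace_gettable('C:\traces\slow_period.trc', DEFAULT)
WHERE EventClass IN (10, 12)   -- RPC:Completed, SQL:BatchCompleted
ORDER BY Reads DESC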
Andrew J. Kelly SQL MVP
"Marek Wierzbicki" <marek.wierzbickiiiii@.azymuttttt.pl> wrote in message
news:ed8rmq$1h4p$1@.news2.ipartners.pl...
> Hi
> I have trouble with MSSQL2000 SP4 (without any hotfixes). During last two
> weeks it start works anormally. After last optimalization (about few
> months ago) it works good (fast, without blocks). Its buffer cache hit
> ratio was about 99.7-99.8. Last day it starts work slow, there was many
> blocks and dedlocks. There are no any queries, jobs and applications was
> added. Now buffer cache hit ratio oscilate about 95-98. I try update
> statistics and reindex some hard used tables, but there is no effect or
> effect is wery short (after few hours problem return).
> Mayby somene know what it could be?
> Is it possible to estimate how each table (using DBCC SHOW_STATISTICS or
> DBCC SHOWCONTIG or others) how the table affect on total buffer cache hit
> ratio?
> Marek
> --
> www.programowanieobiektowe.pl|||> Chances are you are now scanning a table(s) where as before you were doing
> seeks. You need to profile the queries to see which are the poorly
> performing ones (ones with high duration and reads) and address why.
Profiler didn't show any long-running or expensive queries, but some simple
queries take about a minute (not seconds like earlier)
Marek|||Then you need to find out why. Are there high disk queues? High CPU? Is
there blocking for those queries? You have to narrow it down so you can
start addressing the real issue. Maybe these will help:
http://www.sql-server-performance.com/sql_server_performance_audit10.asp
Performance Audit
http://www.microsoft.com/technet/prodtechnol/sql/2005/library/operations.mspx
Performance WP's
http://www.swynk.com/friends/vandenberg/perfmonitor.asp Perfmon counters
http://www.sql-server-performance.com/sql_server_performance_audit.asp
Hardware Performance CheckList
http://www.sql-server-performance.com/best_sql_server_performance_tips.asp
SQL 2000 Performance tuning tips
http://www.support.microsoft.com/?id=224587 Troubleshooting App
Performance
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_perfmon_24u1.asp
Disk Monitoring
http://sqldev.net/misc/WaitTypes.htm Wait Types
Tempdb in 2005:
http://download.microsoft.com/download/4/f/8/4f8f2dc9-a9a7-4b68-98cb-163482c95e0b/WorkingWithTempDB.doc
Physical Database Storage:
http://download.microsoft.com/download/4/f/8/4f8f2dc9-a9a7-4b68-98cb-163482c95e0b/PhysDBStor.doc
--
Andrew J. Kelly SQL MVP
"Marek Wierzbicki" <marek.wierzbickiiiii@.azymuttttt.pl> wrote in message
news:edh8j6$4jj$1@.news2.ipartners.pl...
>> Chances are you are now scanning a table(s) where as before you were
>> doing seeks. You need to profile the queries to see which are the poorly
>> performing ones (ones with high duration and reads) and address why.
> profiler didn't show any long time and expansive queries, but some simple
> queries work about minute (not secounds like earlier)
>
> Marek
>|||> Then you need to find out why. Are there high disk queues? High CPU? Is
> there blocking for those queries? You have to narrow it down so you can
> start addressing the real issue. Maybe these will help:
I will try your links. During high-trouble periods the disk queue is over 50-60.
Marek
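Disk queues that high usually go hand in hand with scans and blocking chains. A quick, rough check for blocking on SQL 2000 (sysprocesses is the pre-DMV way to see it):
-- Sessions currently blocked, and which spid is blocking them
SELECT spid, blocked, waittime, lastwaittype, cpu, physical_io,
       DB_NAME(dbid) AS dbname, hostname, program_name
FROM master..sysprocesses
WHERE blocked <> 0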
Buffer Cache Hit Ratio
Please advise.
I run an OLTP database on SQL Server 2000, on a Windows 2000 Advanced
Server in an Active/Passive cluster configuration. I have
noticed the buffer cache hit ratio dropping down to about
1%, sometimes below 1%. What can I do to ascertain whether the
server needs more memory?
(i.e. what memory is available to SQL Server, what memory is SQL Server
requesting, and what memory is it given?)
I have also noticed the server running out of locks.
Please help, urgent.|||Look at the SQL Server:Memory Manager:Target Server Memory(KB) counter in
Performance Monitor. That is the amount of memory SQL Server would like to
have. Also look at the SQL Server:Memory Manager:Total Server Memory(KB)
counter, which is the total memory SQL Server actually uses.
You can check in Enterprise Manager, in the server properties, whether there are
any limits imposed on the amount of memory SQL Server can use.
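If Performance Monitor is not handy, a minimal sketch of reading the same two counters from T-SQL (SQL 2000's master.dbo.sysperfinfo; values are in KB, and LIKE is used to avoid depending on the exact '(KB)' suffix spelling):
SELECT counter_name, cntr_value AS kb
FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Memory Manager%'
  AND (counter_name LIKE 'Target Server Memory%' OR counter_name LIKE 'Total Server Memory%')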
--
Jacco Schalkwijk
SQL Server MVP
"Olutimi" <anonymous@.discussions.microsoft.com> wrote in message
news:07f901c3c47e$e30f25d0$a401280a@.phx.gbl...
> Please advice.
> I ran an OLTP Database on a SQL 2K on WIN 2K Advaced
> Server in an Active/Passive Cluster configuration. I have
> noticed the buffer cache hit ratio dropping down to about
> 1% sometimes below 1%. Waht can I do to ascertain the
> Server needs more memory.
> (i.e What memory is available to SQL ,what memory is SQL
> requesting and what memory is given.)
> I have also noticed the server running out of locks.
>
> Please help urgent.
Buffer Cache
hi,
How can we check the buffer cache hit ratio of SQL Server? I don't want to go through
the SQL counters as I don't have direct access to the server. Is it possible via some SP or command?
Regards
Sunny|||Hi,
Execute the below command from Query Analyzer
DBCC PERFMON
Please refer to BOL for more info on the above command.
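If the DBCC output is awkward to read, the single ratio can also be computed from T-SQL (a sketch assuming SQL 2000's master.dbo.sysperfinfo, where the ratio counter has to be divided by its 'base' counter):
SELECT CAST(100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS decimal(5, 2)) AS buffer_cache_hit_ratio_pct
FROM master.dbo.sysperfinfo r
JOIN master.dbo.sysperfinfo b
  ON b.object_name = r.object_name
WHERE r.object_name LIKE '%Buffer Manager%'
  AND r.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'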
Thanks
Hari
MCDBA
"Sunny" <anonymous@.discussions.microsoft.com> wrote in message
news:ED4DADFD-C6E4-4D03-B2E4-6755EE1C5949@.microsoft.com...
> hi,
> How can we check buffer cache ratio of sql server..i don't want to go
inside sql counters as i dont have direct access of sever..is it possible by
some SP or command.
> Regards
> Sunny
Browser Refresh
The browser refreshes every 2 seconds when
"Render this report with the most recent data" -> "Cache a temporary *" is
selected.
It also refreshes around every 30 seconds when nothing is selected. Has
anyone had a similar issue? I've done a little searching; is there a way to
disable auto-browser refresh entirely?|||Check if AutoRefresh is set in the 'Report Properties'... maybe it is
set to 2 seconds.
"Sean Edwards" <Sean Edwards@.discussions.microsoft.com> wrote in message
news:DA2C5B96-DF80-4887-99AF-D87B573EBC00@.microsoft.com...
> The browser refreshes every 2 seconds when
> "Render this report with the most recent data"->"Cache a temporary *" is
> selected.
> It also refreshes around every 30 seconds when nothing is selected. Has
> anyone had an similar issue, I've done a little searching, is their a way
to
> disable auto-browser refresh entierly?|||Thanks, I didn't even realize you could set the report's cache inside of
Visual Studio .NET.
"RK Balaji" wrote:
> Check if AutoRefresh is selected in the 'Report Properties'.. may be it is
> set to 2 seconds..
>
> "Sean Edwards" <Sean Edwards@.discussions.microsoft.com> wrote in message
> news:DA2C5B96-DF80-4887-99AF-D87B573EBC00@.microsoft.com...
> > The browser refreshes every 2 seconds when
> > "Render this report with the most recent data"->"Cache a temporary *" is
> > selected.
> >
> > It also refreshes around every 30 seconds when nothing is selected. Has
> > anyone had an similar issue, I've done a little searching, is their a way
> to
> > disable auto-browser refresh entierly?
>
>
"Render this report with the most recent data"->"Cache a temporary *" is
selected.
It also refreshes around every 30 seconds when nothing is selected. Has
anyone had an similar issue, I've done a little searching, is their a way to
disable auto-browser refresh entierly?Check if AutoRefresh is selected in the 'Report Properties'.. may be it is
set to 2 seconds..
"Sean Edwards" <Sean Edwards@.discussions.microsoft.com> wrote in message
news:DA2C5B96-DF80-4887-99AF-D87B573EBC00@.microsoft.com...
> The browser refreshes every 2 seconds when
> "Render this report with the most recent data"->"Cache a temporary *" is
> selected.
> It also refreshes around every 30 seconds when nothing is selected. Has
> anyone had an similar issue, I've done a little searching, is their a way
to
> disable auto-browser refresh entierly?|||Thanks, I didn't even realize you could set the reports cache inside of
Visual.net
"RK Balaji" wrote:
> Check if AutoRefresh is selected in the 'Report Properties'.. may be it is
> set to 2 seconds..
>
> "Sean Edwards" <Sean Edwards@.discussions.microsoft.com> wrote in message
> news:DA2C5B96-DF80-4887-99AF-D87B573EBC00@.microsoft.com...
> > The browser refreshes every 2 seconds when
> > "Render this report with the most recent data"->"Cache a temporary *" is
> > selected.
> >
> > It also refreshes around every 30 seconds when nothing is selected. Has
> > anyone had an similar issue, I've done a little searching, is their a way
> to
> > disable auto-browser refresh entierly?
>
>