
Wednesday, March 7, 2012

bug - insert milliseconds

I noticed that my tables are not storing the millisecond values that I
am inserting. Is this a bug? If not, can someone please explain what
is going on, or point me to a resource so I can research it further.

create table #t (col1 int identity,col2 datetime)
insert into #t (col2)
SELECT '2006-03-21 18:59:50.985'
select * from #t

Hi Dave,

That's curious; in SQL 2000 this script:

create table #t (col1 int identity,col2 datetime)
insert into #t (col2)
SELECT '2006-03-21 18:59:50.985'
select * from #t

Results in:
2006-03-21 18:59:50.987

If I look up "datetime and smalldatetime" in Books Online it says:
"Date and time data from January 1, 1753 through December 31, 9999, to an
accuracy of one three-hundredth of a second (equivalent to 3.33 milliseconds
or 0.00333 seconds). Values are rounded to increments of .000, .003, or .007
seconds, as shown in the table. (the table below not included here)"

Maybe this helps you somehow.

--
-Dick Christoph
"Dave" <daveg.01@.gmail.com> wrote in message
news:1143152560.420148.112820@.e56g2000cwe.googlegroups.com...
>I noticed that my tables are not storing the millisecond values that I
> am inserting. Is this a bug? If not, can someone please explain what
> is going on, or point me to a resource so I can research it further.
>
> create table #t (col1 int identity,col2 datetime)
> insert into #t (col2)
> SELECT '2006-03-21 18:59:50.985'
> select * from #t

|||

Yes, it does. I should have looked this up. It rounds to the nearest
.003 second.

Thanks!
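As a side note, here is a minimal T-SQL sketch (based on the script above) that shows the rounding directly; the stored values noted in the comments follow the .000/.003/.007 rounding rule quoted from Books Online:

create table #t (col1 int identity, col2 datetime)

-- datetime is accurate to 1/300 of a second, so inserted values are
-- rounded to .000, .003, or .007 millisecond increments
insert into #t (col2) SELECT '2006-03-21 18:59:50.985'  -- stored as 18:59:50.987
insert into #t (col2) SELECT '2006-03-21 18:59:50.990'  -- stored as 18:59:50.990
insert into #t (col2) SELECT '2006-03-21 18:59:50.999'  -- rounds up to 18:59:51.000

select * from #t

drop table #t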


Saturday, February 25, 2012

Buffer Leaks

I've had this a few times.

Can someone explain what a buffer leak is, and how it is avoided?

I downloaded the ExtraSort demo, and in non-debug mode I get loads of errors saying the buffer leaked.

How do I plug the hole? Do I need a big cork?

Simon

Well, there are two leak messages the buffer manager gives, and they happen in very different scenarios; since you didn't specify the exact message, I don't know which one you are getting. I will explain them both, but both indicate a coding error in the component.

1. This buffer has been orphaned...
This happens when the engine is shutting down but a component still has a reference to a buffer. It usually means the component has actually leaked a buffer, since by the time the engine shuts down everything else in the data flow has already finished.

2. A call to ProcessInput unexpectedly kept a reference...
This happens when a component calls AddRef on a buffer that it was passed in a ProcessInput call, which is not allowed. A component that needs to keep a buffer it was passed in ProcessInput must call Clone on the buffer, not AddRef.

You can't do anything about this yourself; the component author must fix the component to resolve these problems.

HTH,
Matt

Friday, February 24, 2012

b-tree structure.....

Can someone explain to me, or point me to a URL where I can find, more details about the b-tree structure?

I am more interested in the calculation that is done in the article excerpt below.

- Article -

With the 900 byte key, 8 rows can fit per database page. This means there will be 12500000 pages at the leaf level, 1562500 pages at the next level up in the b-tree and so on, giving a total of 12500000 + 1562500 + 195313 + 24415 + 3052 + 382 + 48 + 6 + 1 = 14285717 pages (including 1785717 to store the upper levels of the b-tree).

--

How are 1562500, 195313, 24415, etc. calculated from the 12500000 leaf pages?

I suggest you acquire Kalen's book.

http://www.amazon.com/gp/product/0735609985/102-9365699-1188136?v=glance&n=283155

|||

I have the book.

I didn't see anywhere in it the details I am looking for about how b-tree pages are allocated at the intermediate levels.

If you know, please let me know the page/chapter in the book (maybe I missed it; I will look into it again).

thanks a lot,

ramanuj

|||

Since each value is 900 bytes and a page holds roughly 8000 bytes, you can fit 8 entries per page. Each page in an intermediate level holds pointers to 8 pages in the level below it, so each level up the tree has roughly one-eighth as many pages as the level beneath it (rounded up).

So 12500000 leaf pages need ceiling(12500000 / 8) = 1562500 pages at the next level, those need ceiling(1562500 / 8) = 195313 pages at the level above that, and so on, up to the single root page.
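To make the arithmetic concrete, here is a small T-SQL sketch that starts from the article's 12500000 leaf pages and fan-out of 8 and walks up the tree, reproducing the article's page counts and total:

DECLARE @pages bigint
DECLARE @total bigint
DECLARE @level int

SET @pages = 12500000   -- leaf-level pages from the article
SET @total = 0
SET @level = 0          -- level 0 = leaf level

WHILE @pages > 1
BEGIN
    SET @total = @total + @pages
    PRINT 'level ' + CAST(@level AS varchar(10)) + ': '
        + CAST(@pages AS varchar(20)) + ' pages'
    -- each page one level up points to 8 pages in the level below,
    -- so the next level needs ceiling(current pages / 8) pages
    SET @pages = CEILING(@pages / 8.0)
    SET @level = @level + 1
END

SET @total = @total + 1   -- the single root page
PRINT 'total pages: ' + CAST(@total AS varchar(20))   -- 14285717, as in the article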

|||

Approximately how many bytes are needed to hold each record's details (pointers) in the intermediate pages, starting from the leaf pages?

What if I have 100 records per page? Does that mean the intermediate pages would occupy 100 times less space at each level? How?

|||

Each index entry has the following values:

- The clustered index key

- The index key

So each record will occupy the size of the clustered index key, plus the sum of the sizes of the individual fields of the index key.

Yes - if you have 100 records per page (that is, about 80 bytes per record for the clustered index key plus the sum of the index key columns), you'd have a tree where each level has roughly 100 times fewer pages than the level below it.

|||

Thanks for the info.

I am still looking to understand the contents of the intermediate pages. I understand the leaf pages.

|||

Ramanuj,

Kalen has dedicated the entire chapter 8 (pages 405 and onward) to indexes. You should get all of your questions answered there.
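If you want to see the intermediate levels of a real index rather than working them out on paper, something along these lines (the table name dbo.MyTable and the index_id of 1 are placeholders) lists the page count and average record size at every level of the b-tree on SQL Server 2005 and later:

SELECT index_level,               -- 0 = leaf; higher values are intermediate levels, the highest is the root
       page_count,                -- pages used at this level
       record_count,
       avg_record_size_in_bytes   -- approximate size of each entry at this level
FROM sys.dm_db_index_physical_stats(
         DB_ID(),                    -- current database
         OBJECT_ID('dbo.MyTable'),   -- placeholder table name
         1,                          -- placeholder index_id (1 = the clustered index)
         NULL,                       -- all partitions
         'DETAILED')                 -- DETAILED mode returns one row per b-tree level
ORDER BY index_level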
