
 Table lock, but still page locks?


MuadDBA

628 Posts

Posted - 2003-04-29 : 15:32:40
Hello,

I have a query that runs against my SQL 7.0 SP3 database:

DELETE FROM LOG WHERE TIME_STAMP <= '2003-04-25-16.03.38.07' AND (ID = '' OR ID IS NULL OR NOT EXISTS (SELECT ID FROM QUEUES WHERE LOG.ID = QUEUES.ID ))

This query is provided by a third-party product, so there isn't a lot of room for manipulation.

The problem is, I get tons and tons of page-level update locks on my table when this statement is executed. I also get one intent-exclusive (IX) table lock that never seems to take effect (or I assume it doesn't, because the statement continues to lock pages and pages of data).

The real problem is that I run out of memory before SQL can take enough page-level locks to complete the statement, and then it rolls back and causes errors in my application.

Is there any way to structure my table and indexes to reduce this problem and encourage lock escalation, at least to extent locks or something of that nature?

MuadDBA

628 Posts

Posted - 2003-04-29 : 16:12:25
On a related note, Books Online says that if the locks configuration parameter is left at zero, SQL Server dynamically determines the number of locks it needs, up to the point where it would begin paging.

If I manually set the maximum number of locks, will SQL allocate them even if that causes paging, or will it still run out when it senses it would have to start paging?

I know, I know, we need more memory...it's coming, it's 3 months away, but I need to get a handle on this now, even if I do suffer some performance hits from it.
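For reference, this is roughly how the locks setting would be inspected and pinned with sp_configure (it is an advanced option; 0 means dynamic allocation). A sketch only; the value shown is illustrative, not a recommendation:

```sql
-- Sketch: inspect the current 'locks' setting and pin it to a fixed value.
-- With a non-zero value SQL Server allocates that many lock structures at
-- startup instead of sizing the pool dynamically.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'locks'           -- show configured and run values
EXEC sp_configure 'locks', 20000    -- 20000 is an illustrative value only
RECONFIGURE
GO
```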


jasper_smith
SQL Server MVP & SQLTeam MVY

846 Posts

Posted - 2003-04-29 : 17:59:37
Can you post the DDL for the table, including indexes? Have you got an index on TIME_STAMP? How much memory is installed? What's the size of the table (rows and space)?

Locks are always taken top-down to prevent deadlocks. The IX table-level lock is used to protect the lower-level locks and to indicate that another process cannot take an X lock on the table; it is not a "serious" attempt to get a table lock (its status will be GRANT, not WAIT, indicating it has got the lock it was after).

Dynamic lock escalation is indeed based on memory, and is roughly: if (number of locks for a single transaction > 1250, or number of locks for a single index/table scan > 765) and more than 40% of the server memory pool is being used for locks, then SQL Server starts looking to escalate.

The update locks would indicate (without knowing the specifics) that there's no useful index for the delete; they are used to protect the search for qualifying rows before converting to X locks for the delete operation. Without DDL we can only speculate.
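One way to watch this pattern in practice (a sketch using the standard sp_who/sp_lock procedures) is to run the delete in one connection and, from a second connection:

```sql
-- From a second connection while the DELETE runs:
EXEC sp_who     -- find the spid executing the delete
EXEC sp_lock    -- all current locks; look for the PAG/U rows on the table
                -- and the TAB/IX row with status GRANT
```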


HTH
Jasper Smith

0x73656c6563742027546f6f206d7563682074696d65206f6e20796f75722068616e6473203f27

MuadDBA

628 Posts

Posted - 2003-04-29 : 18:06:54
Thanks jasper, I will post the DDL tomorrow, as I am home now.

I don't think there is an index on time_stamp, but I had thought of the same thing. Unfortunately, I tried creating one and it didn't help with the locks at all as far as I could tell.

Thanks for the help, and I'll post DDL tomorrow!


MuadDBA

628 Posts

Posted - 2003-04-30 : 15:48:56
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[LOG]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[LOG]
GO

CREATE TABLE [dbo].[LOG] (
[TYPE] [smallint] NULL ,
[ERROR] [smallint] NULL ,
[TIME_STAMP] [varchar] (26) NOT NULL ,
[ID] [varchar] (44) NULL ,
[NAME] [varchar] (44) NULL ,
[BATCH] [varchar] (44) NULL ,
[O_RESOURCE] [varchar] (20) NULL ,
[QUEUE] [varchar] (20) NULL ,
[NEW_QUEUE] [varchar] (20) NULL ,
[SERVER] [varchar] (20) NULL ,
[RULE_NUM] [smallint] NULL ,
[FORM] [varchar] (44) NULL ,
[PRIORITY] [smallint] NULL ,
[WORKSET] [varchar] (20) NULL ,
[INTVAR1] [int] NULL ,
[INTVAR2] [int] NULL ,
[INTVAR3] [int] NULL ,
[INTVAR4] [int] NULL ,
[INTVAR5] [int] NULL ,
[INTVAR6] [int] NULL ,
[INTVAR7] [int] NULL ,
[INTVAR8] [int] NULL ,
[INTVAR9] [int] NULL ,
[INTVAR10] [int] NULL ,
[INTVAR11] [int] NULL ,
[INTVAR12] [int] NULL ,
[INTVAR13] [int] NULL ,
[INTVAR14] [int] NULL ,
[INTVAR15] [int] NULL ,
[STRVAR1] [varchar] (20) NULL ,
[STRVAR2] [varchar] (20) NULL ,
[STRVAR3] [varchar] (20) NULL ,
[STRVAR4] [varchar] (20) NULL ,
[STRVAR5] [varchar] (20) NULL ,
[STRVAR6] [varchar] (20) NULL ,
[STRVAR7] [varchar] (20) NULL ,
[STRVAR8] [varchar] (20) NULL ,
[STRVAR9] [varchar] (20) NULL ,
[STRVAR10] [varchar] (20) NULL ,
[STRVAR11] [varchar] (20) NULL ,
[STRVAR12] [varchar] (20) NULL ,
[STRVAR13] [varchar] (20) NULL ,
[STRVAR14] [varchar] (20) NULL ,
[STRVAR15] [varchar] (20) NULL
) ON [PRIMARY]
GO

CREATE INDEX [LOG_ID] ON [dbo].[LOG]([ID]) ON [PRIMARY]
GO




jasper_smith
SQL Server MVP & SQLTeam MVY

846 Posts

Posted - 2003-04-30 : 17:26:10
I appreciate there may be questions you can't answer because this is a third-party app, but....

1) The delete will always do a table scan because there's no index on TIME_STAMP, confirming the need for the update locks to find qualifying rows
2) Why is TIME_STAMP varchar and not datetime?
3) Where's the primary key?
4) Why is ID varchar(44)? It seems excessive. Is it unique?
5) If TIME_STAMP were datetime I would consider it a candidate for a clustered index, but it's so wide I would be hesitant
6) Run away!!

But seriously, you need to speak with the vendor to get these issues resolved. With no data I can only theorise (and this deals only with this query, without considering any adverse effect on other activity): a clustered index on TIME_STAMP would allow a clustered-index seek before a loop join with QUEUES, assuming an index on QUEUES.ID. If LOG.ID is unique (and the index is changed to unique), even better. This of course depends on how many rows are affected. The design of just this one table does not bode well for the rest of the database.
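A sketch of those suggested index changes (assuming the vendor schema can be altered at all, and noting that the unique index only applies if ID really is unique, which is in question here):

```sql
-- Clustered index to support the TIME_STAMP range predicate:
CREATE CLUSTERED INDEX LOG_TIME_STAMP ON dbo.LOG (TIME_STAMP)
GO
-- Only if LOG.ID turns out to be unique -- replace the existing index:
-- DROP INDEX LOG.LOG_ID
-- CREATE UNIQUE INDEX LOG_ID ON dbo.LOG (ID)
```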

HTH
Jasper Smith

Edited by - jasper_smith on 04/30/2003 17:32:55

MuadDBA

628 Posts

Posted - 2003-04-30 : 17:50:33
Good points, all, and I do agree about the database design. This particular database was designed, I believe, by ancient Greeks who were hesitant to follow what the Arabic scholars of the time called "good database design".

Typically our ID field is about 17 characters long, but the software has to be flexible enough to allow for even greater ID lengths.

I suppose there could be a primary key on this table, using ID and TIME_STAMP, but ID by itself is not unique, as it merely identifies... {sigh} I guess I will have to explain what this table does.

Each record in our database is identified by the ID field; it is the same in our OBJECTS, QUEUES, etc. tables. The LOG table simply tracks movement in the QUEUES table, e.g. object X was moved from queue Y to queue Z, or object X was deleted, etc.

This table has over a million records in it, as our transaction volume is quite high, and the application deletes LOG records whose objects have had no QUEUES record for 4 days.

Yes, I would love to run away, or slap the person who designed this database, or both. But I've got to deal with what I've got. We are upgrading to a supposedly better version very soon, but I don't know what database design changes, if any, that will bring.

Why is the time_stamp varchar? Who knows....

I can try a clustered index, as a regular index on TIME_STAMP did not seem to impact my problem of update locks at all.

The vendor is Eastman Kodak, by the way; this is a product called Workflow for Windows, part of an imaging system we use here at the bank. The bank is opposed to any software developed especially for it (that isn't developed in-house), as they feel it exposes them to too much risk, but no one seems to be able to deliver a custom solution that's worth a shit in terms of maintenance and management.

Oh well. thanks for the help.

Joe



jasper_smith
SQL Server MVP & SQLTeam MVY

846 Posts

Posted - 2003-04-30 : 18:10:20
I've seen worse vendor apps, believe me. Businesses buy these things and then moan to IT when they run like dogs; same old, same old. I'm still looking for a good, decent-size vendor app (i.e. not ultra-specialised) that's not a bag of s##t! The search goes on.....

MuadDBA

628 Posts

Posted - 2003-05-01 : 10:39:19
Well, I put a clustered index on TIME_STAMP, and I still get a ton of update page locks, plus some exclusive page locks.

This is driving me crazy. Now that the data is ordered by the clustered index, you would think it could at least escalate to extent locks or something. The deletes must just not be pulling data contiguously enough to do that.

Arg, thanks for all the help


jasper_smith
SQL Server MVP & SQLTeam MVY

846 Posts

Posted - 2003-05-01 : 12:32:53
I have no data, but the plan with a clustered index on TIME_STAMP for the delete statement in your first post gives me an index seek. Can you run the delete with SET STATISTICS PROFILE ON and post the output? I'd be interested to see what kind of plan it's coming up with. I am on SQL 2000, BTW, so that may be a factor. Can you also post the DDL of QUEUES and give an idea of how many rows the statement is trying to delete? Extent locks are used for DDL operations rather than DML. SQL will start at row or page locks and, if it escalates, go straight to a table lock; locks can only escalate to a table lock, it's not a gradual process.





HTH
Jasper Smith

0x73656c6563742027546f6f206d7563682074696d65206f6e20796f75722068616e6473203f27

MuadDBA

628 Posts

Posted - 2003-05-01 : 13:24:21
Yes, I get an index seek. I guess I wasn't aware that page locks couldn't escalate to extent locks; I swear I used to see tons of extent locks in SQL 6.5, but I didn't know that changed with SQL 7.0.

If the only way up from here is a table lock, I guess I'm just sort of hosed and have to figure out a way to give SQL enough memory to take enough locks. I'm only trying to delete 120,000 rows, so SQL must be running into memory pressure from the other app.


MichaelP
Jedi Yak

2489 Posts

Posted - 2003-05-01 : 14:19:14
I remember reading about a situation where a ton of rows had to be deleted. The solution was something like the following: basically, instead of deleting all 120K rows at once, delete them in several smaller DELETE statements.

 
SET ROWCOUNT 1000

DELETE FROM MyTable WHERE TIME_STAMP <= DATEADD(dd, -4, CONVERT(varchar(20), getdate(), 101))

SET ROWCOUNT 0


Run that statement at scheduled intervals so that you don't kill performance. I can't promise that this is the best solution, but it might help you out a bit.

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda>

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-05-01 : 14:25:48
quote:

I remember reading something sometime where a ton of rows had to be deleted. The solution was something like the following. Basically, instead of deleting all 120k rows at once, delete them in several smaller delete statements

 
SET ROWCOUNT 1000

DELETE FROM MyTable WHERE TIME_STAMP <= DATEADD(dd, -4, CONVERT(varchar(20), getdate(), 101))

SET ROWCOUNT 0


Run that statement at some scheduled intervals so that you do not kill performance. I can't promise that this is the best solution, but it might help you out a bit.

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda>



Yes this is what we do for a production job that purges data.

You just need to add a WHILE statement that checks if there are more records to DELETE.

Here is a sample:
DECLARE @cnt int

SELECT @cnt = COUNT(*)
FROM Table1
WHERE Column1 = 'SomeValue'

SET ROWCOUNT 20000  -- cap each DELETE at 20,000 rows

WHILE @cnt > 0
BEGIN
    DELETE FROM Table1
    WHERE Column1 = 'SomeValue'

    CHECKPOINT  -- to keep the log flushed; this step is necessary in our environment

    SELECT @cnt = COUNT(*)
    FROM Table1
    WHERE Column1 = 'SomeValue'
END

SET ROWCOUNT 0  -- reset



It's a simple example, but I think you get the point. The SELECT and DELETE statements need to use the same table and WHERE clause for it to work right; just change those two things to suit your needs.

Tara

MuadDBA

628 Posts

Posted - 2003-05-01 : 14:26:18
Thanks for the suggestion. If it were up to me, that is how I would approach it; however, it's the vendor product that initiates the query, and I have no control over it.

About the only thing I can think of is to create a delete trigger and try to work around it that way, but I'm not even sure that's going to help me.

We shall see.


tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-05-01 : 14:43:57
A trigger might be your solution. You could use an INSTEAD OF DELETE trigger. I haven't ever used one, but I think it would work for you. The only problem is whether or not the vendor would complain if they found out. At my last job, we had a vendor that wouldn't allow us to create triggers on a database because they said it would violate the agreement we had with them.
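A hedged sketch of what such a trigger might look like (SQL 2000+ syntax only; the trigger name and batch size are made up for illustration). Note this table has no key, so joining back from the deleted pseudo-table on ID + TIME_STAMP is only an approximation:

```sql
-- Illustration only: replay the vendor's delete in small batches.
-- Assumes (ID, TIME_STAMP) identifies rows closely enough, which this
-- table does not guarantee; rows with NULL ID would not match the join.
-- A DELETE issued inside an INSTEAD OF DELETE trigger on the same table
-- is documented not to re-fire the trigger.
CREATE TRIGGER trg_LOG_Delete ON dbo.LOG
INSTEAD OF DELETE
AS
BEGIN
    SET ROWCOUNT 1000   -- delete in batches of 1000
    WHILE EXISTS (SELECT 1 FROM dbo.LOG L
                  JOIN deleted d
                    ON L.ID = d.ID AND L.TIME_STAMP = d.TIME_STAMP)
    BEGIN
        DELETE L
        FROM dbo.LOG L
        JOIN deleted d
          ON L.ID = d.ID AND L.TIME_STAMP = d.TIME_STAMP
    END
    SET ROWCOUNT 0
END
GO
```

Bear in mind the batched deletes still run inside the original statement's transaction, so the overall lock footprint may not actually shrink; treat this purely as an illustration of the mechanism.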

Tara

jasper_smith
SQL Server MVP & SQLTeam MVY

846 Posts

Posted - 2003-05-01 : 14:44:20
Out of interest, how much memory has this box got, and is it dedicated to SQL? 120,000 rows is nothing in the grand scheme of things and shouldn't make SQL sweat. Do you have any non-default server settings?


HTH
Jasper Smith

0x73656c6563742027546f6f206d7563682074696d65206f6e20796f75722068616e6473203f27

MuadDBA

628 Posts

Posted - 2003-05-01 : 14:57:34
I believe INSTEAD OF triggers are a SQL 2000 thing; I'm using 7.0.

Jasper, I agree 120K rows is no big deal, and we have 511MB of RAM on this server, so it shouldn't be hurting me. Obviously something else on that box is using a good deal of memory and won't give it up; I just don't know what it is, since the imaging system is relatively calm at that time (it's 1am).


tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2003-05-01 : 16:19:08
Ah, 7.0; I forgot that's the version you're on. That's too bad, because I think the INSTEAD OF DELETE trigger would have worked for you.

Tara

jasper_smith
SQL Server MVP & SQLTeam MVY

846 Posts

Posted - 2003-05-01 : 17:12:20
Does the delete have to be done by the app (is it a rolling archive/tidy-up procedure based on date)? If not, you could have a job that breaks the work into smaller chunks, so that when the app does its delete (if you can't prevent it) there is less left to do. Not sure if this is possible, though, since it's third-party.


HTH
Jasper Smith

0x73656c6563742027546f6f206d7563682074696d65206f6e20796f75722068616e6473203f27