
 Change data type - not enough space


patshaw
Posting Yak Master

177 Posts

Posted - 2008-08-20 : 09:44:06
Hi all,

I have a table with about 200 million rows. One column is currently INT data type but this needs to be changed to BIGINT. The problem is that there is simply not enough free disc space on the drive to perform the action. Whichever method or workaround I try I run out of disc space.

Does anyone have any suggestions how I can achieve this using as little free disc space as possible?

Thanks.

blindman
Master Smack Fu Yak Hacker

2365 Posts

Posted - 2008-08-20 : 10:07:44
If you don't have enough disk space to make this change, then you don't have enough disk space to run your database.
The solution is to get more disk space. Considering how cheap drives are these days, I can't understand why anyone would have this problem.

Boycott Beijing Olympics 2008

patshaw
Posting Yak Master

177 Posts

Posted - 2008-08-20 : 10:27:03
The change consumed 80GB of the available 90GB on the drive. This is a 24/7 business-critical server, so swapping a disc isn't that simple. In any case, under normal operation 90GB is ample space for this database and all the others on the instance to run healthily. I agree that in time something will need to be done, and that solution will probably be joining this server to the SAN, but this is not currently a priority for the business.

blindman
Master Smack Fu Yak Hacker

2365 Posts

Posted - 2008-08-20 : 13:26:34
Any transaction against that table is going to need to be logged for rollback, and so will require significant space.

Boycott Beijing Olympics 2008

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2008-08-20 : 14:32:19
200 million rows times an additional 4 bytes for bigint is only 0.74 GB. Even with bigger indexes, it should be 5 GB or less.
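(That is, 200,000,000 rows x 4 extra bytes = 800,000,000 bytes, or roughly 0.74 GB.)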

There was some other issue that caused you to run out of space. Maybe a bad table design. Maybe you just blew up the transaction log.



CODO ERGO SUM

sodeep
Master Smack Fu Yak Hacker

7174 Posts

Posted - 2008-08-20 : 14:57:44
This change also needs additional space in the indexes if that column is included in any of them, but 80 GB is far too much for this. Something must be wrong.
Run sp_spaceused on that table.
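For example:

EXEC sp_spaceused 'your_table_name'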

blindman
Master Smack Fu Yak Hacker

2365 Posts

Posted - 2008-08-20 : 16:30:30
It's not the additional 4 bytes that is killing him. It's storing the contents of the table as a transaction during the modification.

Boycott Beijing Olympics 2008

rmiao
Master Smack Fu Yak Hacker

7266 Posts

Posted - 2008-08-20 : 23:27:49
How big is the table? You need free space at least as large as the table itself.

patshaw
Posting Yak Master

177 Posts

Posted - 2008-08-22 : 09:31:09
Yes, I would agree with blindman. If I remember correctly (as I am away from work) the table is about 40GB. I dropped an index on this column and then ran:

BEGIN TRAN
ALTER TABLE callrslt_archive
ALTER COLUMN callresultcode BIGINT
COMMIT TRAN

The change ran for about 4 hours, as you would expect on this many rows, but when disc space dropped to under 10GB I lost my nerve and killed the tran, as this is a live server. The rollback took about 8 hours.

I reclaimed all the space by running a DBCC SHRINKDATABASE on the database as autoshrink is off.

I am now thinking about introducing a new BIGINT column to the table, copying the contents of the original column to the new column, dropping the original column and then renaming the new column plus recreating the index on this column. To my thinking this should not consume too much space at all. Any opinions?
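Roughly what I mean (the new column name here is just a placeholder, and I expect the copy would have to be done in batches to keep the log down):

ALTER TABLE callrslt_archive ADD callresultcode_big BIGINT

-- copy the existing values across (in batches in practice)
UPDATE callrslt_archive SET callresultcode_big = callresultcode

-- drop the old column and rename the new one into its place
ALTER TABLE callrslt_archive DROP COLUMN callresultcode
EXEC sp_rename 'callrslt_archive.callresultcode_big', 'callresultcode', 'COLUMN'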

blindman
Master Smack Fu Yak Hacker

2365 Posts

Posted - 2008-08-22 : 09:55:44
If creating the new column doesn't eat up your disk space in the same manner, then I'm pretty sure dropping the old column would. You are still running transactions against the table.

Here is a suggestion:
Create a new empty table that is identical to your existing table, but uses the bigint datatype. Then, find a way to partition your data into 20 or 30 roughly equal sets, using some low-cardinality column. Then write a cursor that loops through each subset, copying it to the new table and then deleting it from your old table. Run both of these operations in a single transaction within the loop.
When you are done, drop your old table and rename your new table.
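A rough sketch of the idea, using a WHILE loop rather than a cursor and a purely illustrative call_month column as the partitioning key (the new table name is assumed):

DECLARE @batch INT
SET @batch = 1
WHILE @batch <= 30
BEGIN
    BEGIN TRAN
        -- copy one subset into the new bigint table
        INSERT INTO callrslt_archive_new
        SELECT * FROM callrslt_archive WHERE call_month = @batch
        -- and remove the same subset from the old table in the same transaction
        DELETE FROM callrslt_archive WHERE call_month = @batch
    COMMIT TRAN
    SET @batch = @batch + 1
END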

Whether this will work in your production environment is your call, but I'm assuming your prior exploits already locked that table for a day or two without serious repercussions, so if it is off the air for a bit, maybe that's not a big deal.

Boycott Beijing Olympics 2008

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2008-08-22 : 10:03:24
It would probably be faster to create a new table and copy the data from the old table to the new table using DTS (or SSIS) with a reasonable batch size, say 50,000 rows, to prevent blowing up the transaction log. You could also BCP the data out to a file and then back into the new table, again with a reasonable batch size.

Then rename the old table, rename the new table, rename or create constraints on the new table, create the indexes on the new table, and drop the old table.

When you load the new table, make sure the clustered index already exists, and that you load the table in order by the clustered index.
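The rename swap itself would be something like this (the _NEW and _OLD names are just what I would use):

EXEC sp_rename 'callrslt_archive', 'callrslt_archive_OLD'
EXEC sp_rename 'callrslt_archive_NEW', 'callrslt_archive'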



CODO ERGO SUM

jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2008-08-22 : 10:06:24
why bother with a cursor?

how about this:

1. bcp source table to a file.
2. create new empty table with bigint - this is the destination. call it callrslt_archive_NEW
3. bcp the file to the destination table, using -b100000 or something reasonable. that way only 100000 rows are committed each time - this keeps the tlog small.
4. stop here and check that callrslt_archive_NEW is as you expect. number of rows should be the same, etc. when you are satisfied, continue
5. build indexes on callrslt_archive_NEW as appropriate.
6. rename callrslt_archive to callrslt_archive_OLD
7. rename callrslt_archive_NEW to callrslt_archive.
8. drop callrslt_archive_OLD when you feel ready.
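steps 1 and 3 would look roughly like this from a command prompt (database, server and file path are placeholders; -c character format avoids any int-to-bigint trouble with native format):

bcp MyDatabase.dbo.callrslt_archive out D:\bcp\callrslt.dat -c -T -S MYSERVER
bcp MyDatabase.dbo.callrslt_archive_NEW in D:\bcp\callrslt.dat -c -T -S MYSERVER -b 100000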

EDIT: blast!


elsasoft.org

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2008-08-22 : 10:56:24
Double



CODO ERGO SUM

blindman
Master Smack Fu Yak Hacker

2365 Posts

Posted - 2008-08-22 : 11:41:31
In both the copy-table solution and the BCP solution you are making a complete copy of the table, albeit in smaller batches, so both options will still require roughly the same additional space as the original table. One will require it in the database, and the other will require it in the BCP file.

With my solution, I was trying to avoid having two entire copies of the data existing simultaneously.

Boycott Beijing Olympics 2008

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2008-08-22 : 12:03:24
quote:
Originally posted by blindman

In both the copy-table solution and the BCP solution you are making a complete copy of the table, albeit in smaller batches, so both options will still require roughly the same additional space as the original table. One will require it in the database, and the other will require it in the BCP file.

With my solution, I was trying to avoid having two entire copies of the data existing simultaneously.

Boycott Beijing Olympics 2008



I understand, but I figured he already had the space, since the DB got so big the last time around.


CODO ERGO SUM

patshaw
Posting Yak Master

177 Posts

Posted - 2008-08-23 : 07:07:19
Thanks for your help guys. I'll tackle this next week and take all your suggestions into consideration.

patshaw
Posting Yak Master

177 Posts

Posted - 2008-08-28 : 13:56:04
Just to let you know I went with Jez's suggestion and it worked perfectly.

Thanks again for all your help guys.

tfountain
Constraint Violating Yak Guru

491 Posts

Posted - 2008-08-28 : 17:48:01
Every time we've had to do this in the past on large tables, we've always done the following:

- BCP data out of table to the file system
- Truncate the table
- Issue ALTER TABLE statement
- BCP data back into the table

The only caveats are (a) make sure you have a backup in case of a failure and (b) BCP out in character format and not native (the data type change will hose up the BCP in).
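In sketch form (database, server, and file path are placeholders):

bcp MyDatabase.dbo.callrslt_archive out C:\bcp\callrslt.dat -c -T -S MYSERVER
TRUNCATE TABLE callrslt_archive
ALTER TABLE callrslt_archive ALTER COLUMN callresultcode BIGINT
bcp MyDatabase.dbo.callrslt_archive in C:\bcp\callrslt.dat -c -T -S MYSERVER -b 100000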