| Author | Topic |
a_k_
Starting Member
23 Posts |
Posted - 2004-08-12 : 15:22:19
|
| What is the maximum amount of data that can be copied to a data file using the bcp utility? We need to download a table containing 70GB of information. Do we have to split it into parts? |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2004-08-12 : 15:26:04
|
| 70GB is going to take weeks to bcp in. Seriously, it's going to take quite a long time. You might be limited by the amount of RAM on your system though. I would do this in 100MB batches. Even that size is going to take quite a bit of time. I did not see any file size limitations in BOL for bcp.

Tara |
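A rough sketch of what a batched bcp load could look like, assuming a trusted connection; the database, table, server and file names are made up for illustration (-b sets the rows committed per batch):

    REM Hypothetical names throughout; adjust to your own database and server.
    REM -n = native format, -b = rows per batch (each batch commits as its own transaction),
    REM -e = file that collects any rows bcp rejects.
    bcp MyDb.dbo.BigTable in D:\export\bigtable.dat -n -b 50000 -S MYSERVER -T -e D:\export\bcp_errors.txt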
 |
|
|
MichaelP
Jedi Yak
2489 Posts |
Posted - 2004-08-12 : 15:38:11
|
| Maybe a database backup with SQL LiteSpeed would be quicker?? That's a total guess. I thought I'd throw that idea at you and hope it helps ya. BCP'ing 70GB is gonna take a long time. I hope you have it BCPing to a disk other than the disk that the data is on. If not, it's going to take twice as long. I'm with Tara on this one though, batches are your friend.

Michael

<Yoda>Use the Search page you must. Find the answer you will.</Yoda> |
 |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2004-08-13 : 13:48:59
|
| BTW, a 70GB table will be much, much, much larger when you bcp the data out to a file. I'm thinking that it might be a terabyte or so. Tara |
 |
|
|
kselvia
Aged Yak Warrior
526 Posts |
Posted - 2004-08-13 : 19:01:57
|
| If you expect to load this data back into SQL Server, use the -n flag so BCP will export the data in native mode. That will make numeric and other non-character data use much less space, but the output file can only be used by BCP to load back into SQL Server.

--Ken

"Knowledge is a process of piling up facts; wisdom lies in their simplification." |
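A minimal sketch of a native-mode export and re-import; the database, table, server and file names below are hypothetical:

    REM -n keeps numeric/datetime columns in their internal (native) representation,
    REM so the .dat file is only readable by bcp or BULK INSERT, not as plain text.
    bcp MyDb.dbo.BigTable out D:\export\bigtable.dat -n -S MYSERVER -T
    bcp MyDb.dbo.BigTable in D:\export\bigtable.dat -n -S MYSERVER -T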
 |
|
|
nr
SQLTeam MVY
12543 Posts |
Posted - 2004-08-15 : 17:29:03
|
quote: Originally posted by tduggan 70GB is going to take weeks to bcp in. Seriously, it's going to take quite a long time. You might be limited by the amount of RAM on your system though. I would do this in 100MB batches. Even that size is going to take quite a bit of time. I did not see any file size limitations in BOL for bcp. Tara
Um - I reckon hours rather than weeks if using native format and to/from a local drive - make sure there are no indexes on the table for the bcp in. Depends on the system of course but I would go for 6-7 hours for the insert. Splitting into multiple files using an index would be a good idea though - but I would make them a lot bigger than 100MB. Zip up the files before moving to the destination m/c otherwise the transfer could take longer than anything else.

==========================================
Cursors are useful if you don't know sql. DTS can be used in a similar way. Beer is not cold and it isn't fizzy. |
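One way the split could be done is with queryout and range predicates on an indexed column; the Id column, the ranges and the names below are invented for illustration:

    REM Hypothetical Id column and ranges; each command writes one native-format file.
    bcp "SELECT * FROM MyDb.dbo.BigTable WHERE Id <= 10000000" queryout D:\export\bigtable_1.dat -n -S MYSERVER -T
    bcp "SELECT * FROM MyDb.dbo.BigTable WHERE Id > 10000000 AND Id <= 20000000" queryout D:\export\bigtable_2.dat -n -S MYSERVER -T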
 |
|
|
byrmol
Shed Building SQL Farmer
1591 Posts |
Posted - 2004-08-15 : 19:14:42
|
| If you can get the data onto the DBMS box you can use BULK INSERT. You should also look at parallel loading and all the command hints that maximize performance for this. It was my understanding that the file size that BCP can handle is limited only by the OS.

DavidM

"Always pre-heat the oven" |
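A minimal sketch of a BULK INSERT run from the command line with osql, assuming the native-format file already sits on the server's local disk; the database, table and path are made up, and TABLOCK/BATCHSIZE are the sort of hints meant above:

    REM Hypothetical database, table and file path.
    osql -E -d MyDb -Q "BULK INSERT dbo.BigTable FROM 'D:\export\bigtable.dat' WITH (DATAFILETYPE = 'native', TABLOCK, BATCHSIZE = 50000)"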
 |
|
|
MuadDBA
628 Posts |
Posted - 2004-08-16 : 12:00:36
|
quote: Originally posted by tduggan BTW, a 70GB table will be much, much, much larger when you bcp the data out to a file. I'm thinking that it might be a terabyte or so. Tara
I don't get this...70GB will turn into 1TB? From experience, I can say that if you bcp the data out in ASCII format it is more than likely going to be SMALLER than the table itself. Also, because you don't have to export the indexes with the data, you should be looking at a substantial size reduction. If you use native format, it will be a little bigger but somewhat quicker. |
 |
|
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2004-08-16 : 12:03:29
|
| I was thinking without the indexes. I have not seen the bcp file smaller than the table. I might test this out on one of our systems that has tables this large.

Tara |
 |
|
|
nr
SQLTeam MVY
12543 Posts |
Posted - 2004-08-16 : 12:51:11
|
| Depends on the table. You will save on fragmentation, unused space in pages, page header and structure info but probably lose on numeric fields, bit fields, dates, binaries etc.

==========================================
Cursors are useful if you don't know sql. DTS can be used in a similar way. Beer is not cold and it isn't fizzy. |
 |
|
|
Kristen
Test
22859 Posts |
Posted - 2004-08-17 : 06:50:34
|
| Just FMI: If you BCP out (comma delimited I suppose??), and do NOT use Native format, will there be any differentiation between EMPTY and NULL varchar columns?

Kristen |
 |
|
|
nr
SQLTeam MVY
12543 Posts |
Posted - 2004-09-01 : 10:33:26
|
| I've just bcp'd out and in 10GB, 31 million rows. The bcp out and in both took a similar amount of time, 50 mins (in was slightly shorter on a less powerful system), so hopefully your 70GB should take under 7 hrs. Note this is on a 50GB table which has been split into a partitioned view - you might consider something like that. This table has data for the current and previous month updated, the rest is static, so a view works quite well. Still working on transferring the bcp file - the unzip was taking longer than everything else put together.

==========================================
Cursors are useful if you don't know sql. DTS can be used in a similar way. Beer is not cold and it isn't fizzy. |
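For what the partitioned-view idea might look like, here is a bare sketch with invented monthly member tables; a real partitioned view also needs a CHECK constraint on the partitioning column of each member table so the optimizer can skip the ones it doesn't need:

    REM Hypothetical member tables split by month; only the current month's table gets written to.
    osql -E -d MyDb -Q "CREATE VIEW dbo.BigTable AS SELECT * FROM dbo.BigTable_200407 UNION ALL SELECT * FROM dbo.BigTable_200408"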
 |
|
|
kselvia
Aged Yak Warrior
526 Posts |
Posted - 2004-09-01 : 15:19:23
|
| Nigel, consider lzop instead of zip for compression. http://www.lzop.org/ It will compress/decompress many files faster than they can be copied from one disk to another without compression.

--Ken

I want to die in my sleep like my grandfather, not screaming in terror like his passengers. |
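A quick sketch of how it might slot into the transfer (file names invented):

    REM Compress the bcp file before copying it off the box (writes bigtable.dat.lzo next to the original):
    lzop -1 D:\export\bigtable.dat
    REM ...copy the .lzo file to the destination machine, then expand it there:
    lzop -d D:\export\bigtable.dat.lzo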
 |
|
|
nr
SQLTeam MVY
12543 Posts |
Posted - 2004-09-01 : 21:25:00
|
| If only... Bought 10 winzip licences today. But looking for something faster. Does it cope with large files? 7zip has a 4G limit. Liked 7zip because the files it generates are compatible with winzip.

==========================================
Cursors are useful if you don't know sql. DTS can be used in a similar way. Beer is not cold and it isn't fizzy. |
 |
|
|
kselvia
Aged Yak Warrior
526 Posts |
Posted - 2004-09-01 : 22:54:59
|
| I don't know what the upper limit is. I believe I have run 25GB files through the Unix version. See http://compression.ca/act-text.html for benchmarks. You will be astounded at how fast it runs. PKZIP compresses a 4MB file 67% in 19 seconds; LZOP compresses the same 4MB file 49% in 1.4 seconds. A little less compression at 10x the speed. BTW: It's under the GNU General Public License.

--Ken

I want to die in my sleep like my grandfather, not screaming in terror like his passengers. |
 |
|
|
eyechart
Master Smack Fu Yak Hacker
3575 Posts |
Posted - 2005-09-07 : 18:21:57
|
quote: Originally posted by kselvia Nigel, consider Lzop instead of zip for compression. http://www.lzop.org/ It will compress/decompress many files faster than they can be copied from one disk to another without compression. --Ken I want to die in my sleep like my grandfather, not screaming in terror like his passengers.
I know this is an ancient thread, but I have been working with lzop for a while and have always been very impressed by the performance. Some of you who have used lzop may have seen the SMP lzop patch written by Dru Lemley at http://lemley.net/lzop_patches/lzop.html. This version was written for the unix version of lzop and has a dependency on the pthreads library. Long story short, I was able to get this thing compiled natively on win32 using the pthreads-win32 library from Red Hat. Now I get compression rates between 45 and 60MB per second. File copies between SAN-connected disks are actually faster if I compress them than if I do a direct copy. If anyone is interested I can write up the procedure I used and put some binaries online if you want to test. Btw, I have both the official 1.01 release of lzop and the new 1.02rc1 release working.

-ec |
 |
|
|
|