smithje
Starting Member
32 Posts
Posted - 2005-06-23 : 16:34:11
Trying to update weekly performance measurements by importing from a csv file captured from a company source. Each week a new column of data is added to the csv file alongside all previous columns, but my saved DTS package ignores any new columns in the source file. I'm looking for examples of how to edit the query in the package that creates the new table, so that it first reads what is in the source file and creates a column in the destination table for every column in the source. I had planned to delete the destination table prior to running the DTS so it would always be created anew. I do know the names of all the columns that will eventually appear in the source. Is there a way to check whether they exist, or a way to get a column count on the source, that I could use to determine which columns to create in the destination?
JimL
SQL Slinging Yak Ranger
1537 Posts
Posted - 2005-06-24 : 15:46:44
Ahhhhhhh open the package in EM and LOOK?

Jim
Users <> Logic
smithje
Starting Member
32 Posts
Posted - 2005-06-25 : 01:44:02
Yes, I have opened and looked at the package, and it contains fixed mappings to the original version of the source file. The latest version of the source file contains additional columns, and running the saved package ignores them. I am trying to automate the weekly import without having to run the DTS wizard manually each week. I can view and edit the CREATE TABLE statement in the package, but I was hoping to write a statement that checks the source file for new columns and gives me a dynamic way of updating both the CREATE TABLE statement and the insert of data into it.
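For reference, a minimal sketch of that idea in T-SQL, assuming BULK INSERT can reach the file; the file path, table name (WeeklyPerf), and the catch-all varchar(100) column type are placeholders, not anything from the actual package:

CREATE TABLE #HeaderLine (HeaderRow varchar(8000))

-- Load only the first line of the file; LASTROW = 1 restricts the load
-- to the header row, and the default field terminator (tab) keeps the
-- whole comma-separated line in a single column.
BULK INSERT #HeaderLine
FROM 'C:\import\weekly.csv'
WITH (LASTROW = 1, ROWTERMINATOR = '\n')

DECLARE @header varchar(8000), @sql varchar(8000)

-- Strip the carriage return that Windows line endings leave behind.
SELECT @header = REPLACE(HeaderRow, CHAR(13), '') FROM #HeaderLine

-- Column count on the source = number of delimiters + 1.
SELECT LEN(@header) - LEN(REPLACE(@header, ',', '')) + 1 AS SourceColumnCount

-- Turn "Name,Wk01,Wk02" into a bracketed column list and recreate the
-- destination table so it always matches the current file layout.
SELECT @sql = 'CREATE TABLE WeeklyPerf (['
            + REPLACE(@header, ',', '] varchar(100), [')
            + '] varchar(100))'

IF OBJECT_ID('WeeklyPerf') IS NOT NULL
    DROP TABLE WeeklyPerf
EXEC (@sql)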
smithje
Starting Member
32 Posts
Posted - 2005-06-27 : 13:07:37
OK, through experimentation I found a "workaround" that satisfies the current need but does not solve the long-term need for a dynamic method of handling future source files. If I create a dummy source file in csv format with all known column headers and matching data, and use it to create the DTS package, we can use that package for future imports with a few extra steps applied to the csv file. Since the real csv files do not contain all of the columns in the dummy file, and grow by one column each week, we will have the user overwrite the top row of each csv file with the row from the dummy file that contains all the column headers. The data rows then load successfully, leaving the extra columns null. The long-term problem with this process is that each year the column headers will change, and we will have to recreate the whole process for multiple csv files.

Jeff
Rhetorical responses sans detailed instructions are not helpful.
rockmoose
SQL Natt Alfen
3279 Posts
Posted - 2005-06-27 : 14:55:27
For csv import You could use BULK INSERT or bcp. You can dynamically create / alter a table in SQL, if You know the layout of the source file (layout maybe in another csv ;) ). If the layout changes, doesn't that break everything that references the imported csv file?

rockmoose
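To make that suggestion concrete, a rough sketch of the load itself, assuming the destination table has already been recreated to match the header row (same placeholder names as the sketch above):

BULK INSERT WeeklyPerf
FROM 'C:\import\weekly.csv'
WITH (
    FIELDTERMINATOR = ',',   -- csv delimiter
    ROWTERMINATOR = '\n',
    FIRSTROW = 2             -- skip the header row
)

One caveat: BULK INSERT treats every comma as a delimiter, so this only holds if no field contains embedded commas or quotes; for quoted csv data, bcp with a format file (or the DTS text driver) is the usual fallback.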