Question : how to run a very large .SQL file as part of a DTS package or Job?

I have a very large .sql file that is basically a script that imports data. The file typically gets generated a few times a day and has a random file name, but it always ends in .sql. The file is one massive data import script, for example:

INSERT INTO [MAIN] ( [COLUMN1], [COLUMN2], [COLUMN3], [COLUMN4],[COLUMN5],[COLUMN6], ETC...) VALUES ( 'VALUE1', 'VALUE2', 'VALUE3', 'VALUE4', 'VALUE5', ETC...)

It typically has between 8,000 and 12,000 lines that come from an Excel spreadsheet. I know the .sql files are good because I can paste them into Query Analyzer and they run fine, but when I try to run these large files with OSQL it errors out. If I cut the file down to half its size, OSQL runs it just fine. I also tried a T-SQL step in a SQL Job, and it says the script is too big.
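For reference, I am running it with something like this (the server name and path are placeholders; -E uses a trusted connection and -i reads the script from the file):

osql -S MYSERVER -E -i "C:\mybigsql.sql"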

Is there a way I can import this into my server using a step in a DTS package?

Any other suggestions?

Answer : how to run a very large .SQL file as part of a DTS package or Job?

One option is to BULK INSERT the script into a staging table and then execute it line by line:

if exists (select * from information_schema.tables where table_name = 'tmp_staging') drop table tmp_staging
GO

-- one varchar column wide enough to hold a full line of the script;
-- lines longer than 2000 characters would be truncated, so size this to the longest INSERT
create table tmp_staging (sql_line varchar(2000))
GO

-- load the script file into the staging table, one line per row
BULK INSERT tmp_staging FROM 'C:\mybigsql.sql'
GO

-- sanity check: confirm the lines loaded as expected
select * from tmp_staging
GO

declare @sql varchar(2000)

-- walk the staged lines and execute each one as its own batch;
-- this works because every line is a self-contained INSERT statement. A line
-- containing GO would fail, since GO is a client-side batch separator, not T-SQL.
DECLARE c CURSOR FAST_FORWARD READ_ONLY FOR select sql_line from tmp_staging
OPEN c

FETCH NEXT FROM c INTO @sql
WHILE @@FETCH_STATUS = 0
BEGIN
      print @sql
      exec (@sql)
      FETCH NEXT FROM c INTO @sql
END
CLOSE c
DEALLOCATE c
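Since the incoming file name is random, the hard-coded path in the BULK INSERT above has to be built at run time instead. A minimal sketch using dynamic SQL, assuming whatever generates the file can hand you its name (the @file value below is a placeholder):

declare @file varchar(260)
declare @cmd varchar(400)

set @file = 'C:\imports\whatever_got_generated.sql'  -- placeholder: substitute the actual file name
-- BULK INSERT does not accept a variable for the file path, so build the statement dynamically
set @cmd = 'BULK INSERT tmp_staging FROM ''' + @file + ''''
exec (@cmd)

Because only this short loader has to fit in the step, the whole thing can go into a T-SQL job step or an Execute SQL Task in a DTS package; the big script itself never touches the step's size limit.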