Question : SQL Server really slow with big table?

I'm a developer, and I have a honking dev box (quad proc, 2GB of RAM, 7200RPM SATA drives, etc.).  Anyway, I've got SQL Server installed, and I'm trying to manipulate a rather large database (5.5 GB), most of which comes from one table that has about 8 million records in it.

My question is this - on the machine I've described, working with about 8 million records (as described below) - how fast should I expect things to be?  

The table has about 8 fields (some int, bigint, and datetime fields, and one varchar(5000) field).

Specifically, there are a number of things I'm trying to figure out.  I've never worked with a table this big before, so I'm not sure what the "expected" performance would be - that's really what I'm trying to figure out.  

Like, the last column in the table is a BIT field.  The command "UPDATE Table SET Field8 = 1;" takes about 15-20 minutes to complete (updating all 8 million records).  I've potentially got something ELSE going on with my machine, and I'm trying to figure out if that query SHOULD take 15-20 minutes.

Or, I've got Full-Text Search turned on for the VARCHAR(5000) column, and some FTS queries also take 10-30 minutes to complete.  For example:

SELECT TOP 50 * FROM Table WHERE Date >= '2/1/08' AND Date < '2/2/08'

returns about 5000 records and takes about 10 seconds to complete (the date column is also indexed).  But:

SELECT TOP 50 * FROM Table WHERE Date >= '2/1/08' AND Date < '2/2/08' AND CONTAINS(VarCharField, 'sometext')

returns 3000 records and might take 8 minutes to complete.

Is this normal?  I thought FTS was SUPER SUPER fast?

Any thoughts?   Thanks in advance.

EJ

Answer : SQL Server really slow with big table?

Well, in the case of a delete, indexes definitely do matter, but in a different way.  Whenever you do any type of data manipulation (insert/update/delete), the data changes, and the indexes must change as well.  Since you are deleting that many records from a table that big, it is going to take a while.  But there are a couple of things you can do.  First, you can delete in smaller increments.  If you are using SQL Server 2005 (which you posted under), you can use DELETE TOP(x).  That way, not so many locks are held on your table for a long stretch of time.
DELETE TOP(5000) FROM Table
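To keep running that until nothing is left, you can wrap it in a loop that checks @@ROWCOUNT.  A minimal sketch (the table name `dbo.BigTable` and the filter column are hypothetical placeholders for your own criteria):

```sql
-- Delete in batches of 5000 until no matching rows remain.
-- Smaller batches mean shorter lock durations and a smaller log hit per batch.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE SomeColumn = 'value-to-purge';   -- hypothetical filter

    SET @rows = @@ROWCOUNT;                -- 0 once a batch deletes nothing
END
```

If your database is in full recovery mode, you may also want a log backup between batches so the transaction log doesn't balloon.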

You can just keep doing this until all of the necessary records are deleted.  Also, since you have full-text search turned on, deletes may take even longer because of your large varchar field.  Have you ever considered moving that varchar field to its own table, keeping just an ID back to your main table?  It would require a join, but I believe you'd see some performance gains in terms of DML.
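Such a split might look roughly like this (all table and column names here are hypothetical; I'm assuming the main table has a BIGINT primary key called ID):

```sql
-- Narrow "text" table holding only the wide column, one-to-one with the main table.
CREATE TABLE dbo.BigTableText
(
    BigTableID   BIGINT NOT NULL PRIMARY KEY
        REFERENCES dbo.BigTable (ID),      -- ID back to the main table
    VarCharField VARCHAR(5000) NOT NULL
);

-- Queries that actually need the text pay for the join; everything else
-- works against the now much narrower main table.
SELECT TOP 50 t.ID, t.[Date], x.VarCharField
FROM dbo.BigTable t
JOIN dbo.BigTableText x ON x.BigTableID = t.ID
WHERE t.[Date] >= '20080201' AND t.[Date] < '20080202';
```

The full-text catalog would then be built on dbo.BigTableText, so updates and deletes on the main table no longer touch the FTS index at all.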

Also, since you are using 2005, have you looked at using partitions?  That honestly could be what you really need with such a large data set...
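For reference, a partition setup in 2005 involves a partition function, a partition scheme, and a table created on that scheme.  A minimal sketch, partitioning by month on your date column (boundary dates and names are illustrative only, and everything lands on PRIMARY rather than separate filegroups):

```sql
-- Monthly boundaries; RANGE RIGHT means each boundary value starts a new partition.
CREATE PARTITION FUNCTION pfByMonth (DATETIME)
AS RANGE RIGHT FOR VALUES ('20080101', '20080201', '20080301');

-- Map every partition to the PRIMARY filegroup (separate filegroups are common in practice).
CREATE PARTITION SCHEME psByMonth
AS PARTITION pfByMonth ALL TO ([PRIMARY]);

-- A table created on the scheme, partitioned on its date column.
CREATE TABLE dbo.BigTablePartitioned
(
    ID     BIGINT   NOT NULL,
    [Date] DATETIME NOT NULL
    -- ...other columns...
) ON psByMonth ([Date]);
```

With that in place, your date-range queries only touch the relevant partitions, and old months can be switched out almost instantly instead of being deleted row by row.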

Let me know if this helps, or if you have any more questions,
Tim Chapman