Question : Is it efficient to have many thousands of tables in a single database?
Just as a starter: we ONLY have access to SQL Server 2000 Enterprise, and we cannot afford to upgrade to 2005 Enterprise!
I'm at the stage of contemplating a new version of a small vehicle tracking system we run, as we may soon franchise our system out to other small providers. Currently there are approx 1,500 vehicles being tracked on our system. Each vehicle generates up to 3,000 tracking events per day, so on average we get approx 2.5 million events hitting our database per day... and we store up to two months of data per vehicle, so as you can imagine we have a pretty big database!
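A quick back-of-the-envelope check on those figures (a sketch only; reading "up to two months" as roughly 60 days of retention is an assumption):

```python
# Rough sizing for the events table, based on the figures above.
# Assumption: ~2.5 million events/day on average, ~60 days retained.
events_per_day = 2_500_000
retention_days = 60  # "up to two months" read as ~60 days

total_rows = events_per_day * retention_days
print(f"{total_rows:,} rows retained")        # 150,000,000 rows

rows_per_vehicle = total_rows // 1500          # spread over 1,500 vehicles
print(f"~{rows_per_vehicle:,} rows/vehicle")  # ~100,000 rows
```

So each per-vehicle table in the proposed split would hold on the order of 100,000 rows, while the single table carries roughly 150 million.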
At present, due to historical development, the whole system works off a single "events" table for ALL vehicles. This table is hit pretty hard, and it can be a bugger to maintain all the various indexes without creating too much of a hit on the db. As an alternative, I'm considering giving each vehicle its own table: rather than a single "events" table, I'd have (currently) 1,500 "events" tables suffixed by the vehicle ID, i.e. events_1, events_2, events_3... events_1500. My rationale is that 90% of the time (OK, maybe 70-80%!) the customer chooses an individual vehicle either to track on a map or to run a historical report on, so there would be little need to maintain many indexes on each vehicle table beyond the basic ones of, say, datetimestamp and journeyID (each ignition on/off generates a new journeyID).
However, the downside of this approach seems to be that, firstly, there would be thousands of near-identical tables knocking about the database, and surely there is extra system overhead in having so many individual tables? I know (annoyingly) that in SQL 2005 Enterprise it's possible to virtually partition a table on a column (and I could choose vehicleID), but as I said at the start, we can't afford the expense of SQL 2005 Enterprise, so that is not an option.
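Worth noting: SQL Server 2000 does offer a middle ground here, the local partitioned view, where per-vehicle member tables each carry a CHECK constraint on vehicleID and are exposed through a single UNION ALL view; with the constraints in place, the optimizer only touches the relevant member table. The sketch below illustrates the union-view pattern using SQLite via Python purely for portability (SQL Server 2000 syntax differs, and all table/column names here are hypothetical):

```python
import sqlite3

# Per-vehicle member tables presented as one logical "events" view.
# In SQL Server 2000 a CHECK (vehicleID = n) constraint on each member
# table would make this a partitioned view with partition elimination;
# SQLite is used here only to show the shape of the pattern.
con = sqlite3.connect(":memory:")
cur = con.cursor()
for vid in (1, 2):
    cur.execute(f"CREATE TABLE events_{vid} "
                "(vehicleID INTEGER, ts TEXT, journeyID INTEGER)")
cur.execute("INSERT INTO events_1 VALUES (1, '2007-01-01 08:00', 101)")
cur.execute("INSERT INTO events_2 VALUES (2, '2007-01-01 08:05', 201)")

# One logical view over all member tables:
cur.execute("CREATE VIEW events AS "
            "SELECT * FROM events_1 UNION ALL SELECT * FROM events_2")

rows = cur.execute(
    "SELECT journeyID FROM events WHERE vehicleID = 2").fetchall()
print(rows)  # [(201,)]
```

Application code then queries `events` as if it were one table, so you get per-vehicle storage without the "figure out the table name" problem.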
Many differing reports are run from this huge single "events" table at present, each using different columns depending on which particular report is being run, so we have ended up with lots of different indexes on the table to cater for them all. With so many events hitting the table each day, it seems to me that the system must spend a whole lot of time just maintaining indexes...
Does anyone have any sensible opinions to offer on whether it's best to stick with a single huge table (with simplified indexing), or would I be better off going down the "thousands of tables" route instead?
Answer : Is it efficient to have many thousands of tables in a single database?
Personally, I would go with the single huge table. That way, all of the data is in one spot, you don't have to go and figure out table names, and you can maintain it as one entity.
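The single-table approach can still serve the common "one vehicle, date range" query cheaply via a composite index on (vehicleID, datetimestamp). A minimal sketch, again using SQLite via Python only to illustrate the indexing idea (column and index names are hypothetical):

```python
import sqlite3

# One events table; a composite index on (vehicleID, ts) lets a
# single-vehicle date-range query seek straight to the right rows.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE events "
            "(vehicleID INTEGER, ts TEXT, journeyID INTEGER)")
cur.execute("CREATE INDEX ix_vehicle_ts ON events (vehicleID, ts)")
cur.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(1, '2007-01-01 08:00', 101),
                 (2, '2007-01-01 08:05', 201),
                 (2, '2007-01-02 09:00', 202)])

plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events "
    "WHERE vehicleID = 2 AND ts >= '2007-01-02'").fetchall()
print(plan)  # the plan should reference ix_vehicle_ts
```

In SQL Server 2000 the analogous move would be making (vehicleID, datetimestamp) the clustered index, which also keeps each vehicle's rows physically together.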