Question : ASA Audit Trigger -- Get Column Name
I've been tasked with creating an audit trigger on our Sybase DB. I want to put the data in a table like this:

user_id
Field_Changed
New_Value
Original_Value
Changed_By
Index_Num    <--- identity field, autofilled
Date_Changed <--- current timestamp, autofilled
This is what I have so far:
-----------------------------------------------------
CREATE TRIGGER "User_Change" BEFORE UPDATE OF
    "Amt_1", "Amt_2", "Amt_3", "Approve_1", "Approv_2", "Mgr_1", ...
ORDER 1 ON "DBA"."USER_INFO"
REFERENCING OLD AS Orig NEW AS NewVal
FOR EACH ROW /* WHEN ( search-condition ) */
BEGIN
    INSERT INTO Narc_USER_INFO_Change
        (user_id, Field_Changed, New_Value, Original_Value, Changed_By)
    VALUES (Orig.user_id, ... ;
END
-----------------------------------------------------
Does anyone have a good way to track the changed fields, their old and new values, and the user_ID of the person doing the update, without having to write 16 individual triggers, one for each field I need to track?
I've done similar triggers in T-SQL and PL/SQL, but in Sybase I'm a newbie.
Answer : ASA Audit Trigger -- Get Column Name
O.K. You should be able to create that report from the audit table; however, you should define the Before and After value columns as VARCHAR and convert the numeric datatypes, so that you don't have to deal with different datatypes in different columns.
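A minimal sketch of such an audit table, assuming ASA's DEFAULT AUTOINCREMENT and DEFAULT CURRENT TIMESTAMP for the two autofill columns (the table name matches your question; the column sizes are just illustrations):
-----------------------------------------------------
CREATE TABLE Narc_USER_INFO_Change (
    user_id        INTEGER      NOT NULL,
    Field_Changed  VARCHAR(30)  NOT NULL,
    New_Value      VARCHAR(255) NULL,  -- everything stored as VARCHAR
    Original_Value VARCHAR(255) NULL,  -- so one column fits all datatypes
    Changed_By     VARCHAR(30)  NOT NULL,
    Index_Num      INTEGER      NOT NULL DEFAULT AUTOINCREMENT,
    Date_Changed   TIMESTAMP    NOT NULL DEFAULT CURRENT TIMESTAMP,
    PRIMARY KEY (Index_Num)
);
-----------------------------------------------------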
The INSERT / SELECT / UNION ALL / SELECT ... syntax should work fine, except that you will have to add the appropriate CONVERT(varchar, ...) functions, as sketched below.
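A minimal sketch of that pattern, assuming the audit table above, the Orig/NewVal aliases from your trigger, and ASA's one-row dummy table; only two of the sixteen columns are shown, so repeat the UNION ALL branch for the rest:
-----------------------------------------------------
CREATE TRIGGER "User_Change" BEFORE UPDATE OF
    "Amt_1", "Amt_2"  -- list all 16 audited columns here
ORDER 1 ON "DBA"."USER_INFO"
REFERENCING OLD AS Orig NEW AS NewVal
FOR EACH ROW
BEGIN
    -- One SELECT per audited column; a branch contributes a row
    -- only when that column's value actually changed.
    INSERT INTO Narc_USER_INFO_Change
        (user_id, Field_Changed, New_Value, Original_Value, Changed_By)
    SELECT Orig.user_id, 'Amt_1',
           CONVERT(VARCHAR(255), NewVal.Amt_1),
           CONVERT(VARCHAR(255), Orig.Amt_1),
           CURRENT USER
      FROM dummy
     WHERE Orig.Amt_1 <> NewVal.Amt_1
    UNION ALL
    SELECT Orig.user_id, 'Amt_2',
           CONVERT(VARCHAR(255), NewVal.Amt_2),
           CONVERT(VARCHAR(255), Orig.Amt_2),
           CURRENT USER
      FROM dummy
     WHERE Orig.Amt_2 <> NewVal.Amt_2;
    -- Note: a plain <> comparison treats NULL <-> value transitions
    -- as "no change"; add IS NULL tests if you need to capture those.
END
-----------------------------------------------------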
BTW, I understand that you don't want to store a bunch of unchanged data. What you have to weigh is performance versus storage. The time to insert and commit a row is relatively constant, or at least fairly insensitive to row size until rows get very large. If you have plenty of disk, you don't have to worry about the extra storage, and if you have a decent performance margin, you can afford to insert multiple rows. Your system might be lightly loaded enough that you don't need to be too concerned about performance at all. Only you can make that judgment call.
Bottom line is that as long as the application always modifies one row at a time and your OLTP load is not too high, your scheme will work just fine.
If you have batch applications modifying the rows in bulk, the combination of the implied cursor on the NEW and OLD tables, the multiple inserts per modified row, and the commit for each modified row may present too much of a performance hit to ignore.
Let us know how it works for you.
Regards, Bill