updating record timestamp with linqtosql - linq-to-sql

I have some tables that include a lastupdatedate field, the idea being that any time the information in a row is altered, lastupdatedate will be reset to the current date/time. Setting lastupdatedate using the client's current datetime is not a good idea. For starters, clients may be in different time zones. I could solve that by storing UTC, but a more serious issue is that the clocks of different users will not all be synchronized. What makes more sense is to just use GETDATE() for the lastupdatedate parameter in the SQL UPDATE command. That way you are guaranteed that all lastupdatedate values are relative to the same clock: the one on the SQL server.
In ADO.NET this was easy because you directly submitted a SQL statement to be executed, but in LINQ to SQL you would typically call SubmitChanges.
Is there any easy way to do this with LINQ to SQL, short of creating a stored procedure?

Another option you could consider is to create a scalar function on the database server which exposes that server's current time. Map that scalar function to a LINQ to SQL method and call it to get the server's time, then set lastupdatedate on your object prior to SubmitChanges.
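A minimal sketch of such a function (the name dbo.GetServerTime is just an illustration):
CREATE FUNCTION dbo.GetServerTime()
RETURNS DATETIME
AS
BEGIN
    -- Return the SQL Server machine's clock, not the client's.
    RETURN GETDATE();
END
On the C# side, the usual way to map it is to decorate a method on your DataContext with [Function(Name = "dbo.GetServerTime")], then assign its result to lastupdatedate before calling SubmitChanges.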

I also store a last modified date, and I grab it from the web server; that has worked fine so far. If your code runs on only one web server, then the LINQ to SQL code executes on that server, so you have a consistent time source. Will that work?

Related

Application retrieving values at a particular time from sql database

I have a database in MySQL. The values in a column named Curr_BaL are updated by different operations performed on it. The application, which is written in Java, accesses that database. When it runs, by default it should retrieve the last updated value. However, I also want to be able to get the value at a specific DATE entered by the user.
I have tried to do my best, but have not been successful yet, and my whole application depends on that data.
Your problem is not entirely clear. What I can understand is that you need a way to make your users aware of this "last updated" value.
There are several design approaches for this. I think the simplest would be to fetch this value when you authenticate your user and set it in their session information, so it will be available at any time.
You could also have some kind of service caching this value (since I guess it is the same for all users).
A very important thing you didn't mention is who updates this value: an external application? A process in the same application?
As I understand it, a user-supplied date should take priority over the automatic date. A simple way to do that is with triggers. The following may be useful:
CREATE OR ALTER TRIGGER on_table_ins FOR TABLE
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  -- Only fill in the date when the user did not supply one.
  IF (NEW.DATEFIELD IS NULL) THEN
    NEW.DATEFIELD = 'now';
END
This is correct for Firebird; see your RDBMS's manual for triggers and for inserting the current date/time.
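Since the question is about MySQL, a rough equivalent there (trigger, table, and column names are placeholders) might look like this:
DELIMITER //
CREATE TRIGGER on_table_ins BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
    -- Only fill in the date when the user did not supply one.
    IF NEW.datefield IS NULL THEN
        SET NEW.datefield = NOW();
    END IF;
END//
DELIMITER ;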

Manipulate the results of getdate() for databases in different time zones

I have an application with limited support for running in different time zones. Essentially, the application has one database/environment for the west coast and one for the east coast.
The application figures out the current time using getdate().
We don't have the source code, so replacing getdate() with a custom function isn't an option. Is there some way of overriding the getdate() function? Or is it possible to configure a different time zone for different databases? What about different instances?
SQL Server takes its time directly from the host operating system - it is not timezone-aware and there is no way, on one server, to say this database or this instance is in this timezone, and this other database or other instance is in this other timezone. The only way you would be able to accomplish that is to have completely different servers and turn off any date/time synchronization services (including from the host if they're virtual machines).
You should store UTC time using GETUTCDATE() or SYSUTCDATETIME(). Then you don't need to know which timezone a piece of data came from, and it is always easy to convert it to either of these timezones (or any others you need to support later). ASP.NET, for example, has very extensive timezone support.
You can override whatever the source code is sending using an INSTEAD OF INSERT trigger and ignoring anything where they sent GETDATE(). A quick example:
CREATE TRIGGER dbo.whatever
ON dbo.tablename
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT dbo.tablename(col1, col2, date_created)
SELECT col1, col2, SYSUTCDATETIME() FROM inserted;
-------------------^^^^^^^^^^^^^^ this overrides what they passed
END
GO
You can also consider the newer DATETIMEOFFSET data type, but since it's not DST-aware, I'm not sure whether it is valuable for your scenario.
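For illustration, a quick sketch of what DATETIMEOFFSET gives you (the -05:00 offset is just an example, and note it stays fixed regardless of DST):
DECLARE @d DATETIMEOFFSET = SYSDATETIMEOFFSET();  -- current time plus the server's UTC offset
SELECT @d AS server_time,
       SWITCHOFFSET(@d, '-05:00') AS east_coast_time;  -- same instant, shifted to a fixed offset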

Apply a function on all Select statements on table implicitly - SQL Server

Is it possible to apply some function (user-defined / system) to selected columns automatically, perhaps by binding it at the schema level?
My scenario: I automatically save a timestamp in each table when a record is saved, using getdate() as the default value of those columns. That worked fine while we had our own hosting, but since we are moving to shared hosting and don't know which timezone the servers will be in, I am using GETUTCDATE() to get GMT time.
Since a lot of procedures / functions are already in place, I am looking for something where I don't need to convert this UTC time to my local time explicitly,
so that Select * from MyTable gives me the time in my fixed timezone using the function I've created.
Let me know if it's possible in any way.
Thanks.
It's not exactly clear what you want to do, but there's no way to replace what the SELECT statement asks for with something else: what you ask for in a query is what you get. Unless you replace a table with a view with the same name, but that probably isn't the best approach.
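For completeness, a rough sketch of that view trick (the table, columns, and the fixed +05:00 offset are all illustrative, and a fixed offset is not DST-aware):
EXEC sp_rename 'dbo.MyTable', 'MyTable_base';
GO
CREATE VIEW dbo.MyTable
AS
SELECT col1,
       col2,
       -- Stored values are UTC, so the cast yields a +00:00 offset to shift from.
       SWITCHOFFSET(CAST(created_utc AS DATETIMEOFFSET), '+05:00') AS created_local
FROM dbo.MyTable_base;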
Using a view or function would still mean you have to change your code anyway, so why not just UPDATE all data to UTC time and then do the conversion in your application code? SQL Server has no idea what time zone a client is in anyway, so it isn't possible to do the conversion reliably on the server side. Unless perhaps the client sends the local time zone to the server as a parameter or in CONTEXT_INFO, but there wouldn't be much point because doing it in the client would be simpler anyway.
And of course handling it all in the application will give you a much more flexible, robust solution.

What is the best tool to use to transfer Data from Reporting Database to another?

I have a reporting database and have to transfer data from it to another server where we run some other reports or functions on the data. What is the best way to transfer data periodically, monthly or bi-weekly? I can use SSIS, but is there any way I can put a WHERE clause on which rows should be extracted from the source database? E.g., I only want to extract data for the current month. Please do let me know.
Thanks,
Vivek
For scheduling periodic extractions, I'd leave that to SQL Agent.
As for restricting the results by some condition, that's easy. In your data flow source, use SQL Command or SQL Command From Variable instead of Table Name / Table Name From Variable (you should always prefer the query forms anyway, as they are faster), then add a parameter to the query. If you're using an OLE DB connection manager, the placeholder for a parameter is ?; with ADO.NET it is @parameterName.
Now, wire the filter up by clicking the Parameters... button. With OLE DB, parameters are bound by ordinal position starting at 0; if you want to use the same parameter twice, you will have to list it each time, or use the ADO.NET connection manager.
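A sketch of such a parameterized source query for an OLE DB source (table and column names are placeholders); the two ?s would be mapped by ordinal to variables holding the month boundaries:
-- Parameter 0 = first day of the current month, parameter 1 = first day of the next month.
SELECT col1, col2, modified_date
FROM dbo.SourceTable
WHERE modified_date >= ? AND modified_date < ?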
The biggest question you will have to answer is: how do I identify which row(s) need to go? Possibilities are endless: query the target database and find the most recent modified date or highest key value for a table; create a local table that tracks what's been sent and query that; perform an incremental load / ETL instrumentation to identify new/updated/unchanged rows; etc.

Perl: How to copy/mirror remote MYSQL table(s) to another database? Possibly different structure too?

I am very new to this and a good friend is in a bind; I am at my wits' end. I have used GUIs like Navicat and SQLyog to do this, but only manually.
His band info data (schedules and whatnot) is in a MySQL database on a server (admin server).
I am putting together a basic site for him written in Perl that grabs data from a database that resides on my server (public server) and displays schedule info, previous gig newsletters and some fan interaction.
He uses an administrative interface, which he likes and desires to keep, to manage the data on the admin server.
The admin server db has a bunch of tables and even table data the public db does not need.
So, I created tables on the public side that only contain relevant data.
I basically used a GUI to export the data, then inserted it on the public side whenever he made updates to the admin db (copy and paste).
(FYI I am using DBI module to access the data in/via my public db perl script.)
I could access the admin server directly to grab only the data I need, but the whole purpose of this is to "mirror" the data, not access the admin server on every query. Also, some tables are THOUSANDS of rows, and parsing every row in a loop seemed too "bulky" to me. There is, however, a "time" column which could be used for comparison.
I cannot "sync" because the structures are different; I only need the relevant table data from three tables.
SO...... I desire to automate!
I read that "copy" was a fast way, but what I found on how to implement it was too advanced for my level.
I do not have the luxury of placing a script on the admin server to notify when there was an update.
1- I would like to set up a script to check a table to see if a row was updated or added on the admin server's db.
I would then desire to update or insert the new or changed data into the public server's db.
This "check" could be set up in a cron job, I guess, or triggered when a specific page loads on the public side (the same subroutine called by the cron, I would assume).
This data does not need to be "real time", but if he updates something it would be nice to have it appear as quickly as possible.
I have done much reading, module research, and experimenting, but here I am again at Stack Overflow, where I always get great advice and examples.
Much of the terminology is still quite over my head so verbose examples with explanations really help me learn quicker.
Thanks in advance.
The two terms you are looking for are either "replication" or "ETL".
First, the replication approach.
Let's assume your admin server has tables T1, T2, T3 and your public server has tables TP1, TP2.
So, what you want to do (since you have different table structures, as you said) is:
Take the tables from the public server and create exact copies of them on the admin server (TP1 and TP2).
Create a trigger on the admin server's original tables to populate the data from T1/T2/T3 into the admin server's copies of TP1/TP2 (see the sketch after this list).
You will also need to do an initial data population from T1/T2/T3 into the admin server's copies of TP1/TP2. Duh.
Set up the "replication" from the admin server's TP1/TP2 to the public server's TP1/TP2.
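A minimal sketch of such a trigger in MySQL (table and column names are made up for illustration):
DELIMITER //
CREATE TRIGGER t1_to_tp1 AFTER INSERT ON T1
FOR EACH ROW
BEGIN
    -- Copy only the columns the public side needs; upsert in case the row already exists.
    INSERT INTO TP1 (id, gig_date, venue)
    VALUES (NEW.id, NEW.gig_date, NEW.venue)
    ON DUPLICATE KEY UPDATE gig_date = NEW.gig_date, venue = NEW.venue;
END//
DELIMITER ;
You would want similar triggers for UPDATE (and possibly DELETE) on each source table.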
A different approach is to write a program (such programs are called ETL, for Extract-Transform-Load) which extracts the data from T1/T2/T3 on the admin server (the "E" part of "ETL"), massages the data into a format suitable for loading into the TP1/TP2 tables (the "T" part), and transfers those files (via ftp/scp/whatnot) to the public server, where the second half of the program (the "L" part) loads the files into the tables TP1/TP2. Both halves of the program would be launched by cron or your scheduler of choice.
There's an article with a very good example of how to start building Perl/MySQL ETL: http://oreilly.com/pub/a/databases/2007/04/12/building-a-data-warehouse-with-mysql-and-perl.html?page=2
If you prefer not to build your own, here's a list of open source ETL systems; I've never used any of them, so no opinions on their usability/quality: http://www.manageability.org/blog/stuff/open-source-etl
I think you've misunderstood ETL as a problem domain, which is complicated, versus ETL as a one-off solution, which is often not much harder than writing a report. Unless I've totally misunderstood your problem, you don't need a general ETL solution, you need a one-off solution that works on a handful of tables and a few thousand rows. ETL and Schema mapping sound scarier than they are for a single job. (The generalization, scaling, change-management, and OLTP-to-OLAP support of ETL are where it gets especially difficult.) If you can use Perl to write a report out of a SQL database, you probably know enough to handle the ETL involved here.
1- I would like to set up a script to check a table to see if a row was updated or added on the admin server's db. I would then desire to update or insert the new or changed data into the public server's db.
If every table you need to pull from has an update timestamp column, then your cron job includes some SELECT statements with WHERE clauses based on the last time the cron job ran to get only the updates. Tables without an update timestamp will probably need a full dump.
I'd use a one-to-one table mapping unless normalization was required... just simpler, in my opinion. Why complicate it with "big" schema changes if you don't have to?
some tables are THOUSANDS of rows and parsing every row in a loop seemed too "bulky" to me.
Limit your queries to only the columns you need; if there are no BLOBs or exceptionally big columns among them, a few thousand rows should not be a problem via DBI with a fetchall method (fetchall_arrayref, for example). Loop all you want locally; just make as few trips to the remote database as possible.
If a row has a newer date, update it. I will also have to check for new rows for insertion.
Each table needs one SELECT ... WHERE updated_timestamp_columnname > last_cron_run_timestamp. That result set will contain all rows with newer timestamps, which contains newly inserted rows (if the timestamp column behaves like I'd expect). For updating your local database, check out MySQL's ON DUPLICATE KEY UPDATE syntax... this will let you do it in one step.
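A sketch of what that looks like in SQL (table and column names are invented, and the timestamp literal stands in for the last cron run):
-- On the admin server: fetch everything changed since the last run.
SELECT id, gig_date, venue, updated_at
FROM schedule
WHERE updated_at > '2012-01-01 00:00:00';

-- On the public server: insert-or-update each fetched row in one statement.
INSERT INTO schedule_public (id, gig_date, venue, updated_at)
VALUES (1, '2012-02-01', 'The Venue', '2012-01-15 12:00:00')
ON DUPLICATE KEY UPDATE
    gig_date = VALUES(gig_date),
    venue = VALUES(venue),
    updated_at = VALUES(updated_at);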
... how to implement were too advanced for my level ...
Yes, I have actually done this already but, I have to manually update...
Some questions to help us understand your level... Are you hitting the database from the mysql client command-line or from a GUI? Have you gotten to the point where you've wrapped your SQL queries in Perl and DBI, yet?
If the two databases have different schemas, you'll need an ETL solution to map from one schema to the other.
If the schemas are the same, all you have to do is replicate the data from one to the other.
Why not just create a structure on the 'slave' server identical to the master server's? Then create a small table that keeps track of the last timestamp or id for the updated tables.
Then select from the master all rows changed since that last timestamp, or with a greater id, and insert them into the matching table on the slave server.
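A rough sketch of that tracking table and the queries around it (all names are placeholders):
-- On the slave: a small high-water-mark table.
CREATE TABLE sync_state (
    table_name  VARCHAR(64) PRIMARY KEY,
    last_synced DATETIME NOT NULL
);

-- Run against the master, binding the stored mark as the parameter:
SELECT * FROM schedule WHERE updated_at > ?;

-- After a successful copy, advance the mark on the slave:
UPDATE sync_state SET last_synced = NOW() WHERE table_name = 'schedule';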
You will need to be careful of updated rows. If a row on the master is updated but the timestamp doesn't change, then how will you tell which rows to fetch? If that's not an issue, the process is quite simple.
If it is an issue, then you need to be more sophisticated, but without knowing the data structure and update mechanism it's a wild goose chase to give pointers on it.
The script could be called by cron every so often to apply the changes.
If the database structures must be different on the two servers, then a simple translation step may need to be added, but most of the time that can be done within the SQL SELECT statement, maybe with a join or two.