I am looking for a way to manage synchronization between an Access mdb file for an application and a MySQL schema containing a copy of the same tables. This arose from the fact that the application doesn't support MySQL as a backend, but I am looking for a way to use MySQL for other in-office applications that consume the data the first application generates.
Some givens:
1> We cannot abandon the first application, and it's only compatible with Microsoft SQL Server as a backend server to house data.
2> We are not against using Microsoft SQL Server, but the licensing cost is a big concern - as is rewriting some other Access applications written to use linked tables and separate mdb files.
3> The database server should be "PHP friendly" for a future expansion project for an internal corporate intraweb.
4> No data needs to be, nor should be allowed to be, accessed from outside the corporate network.
I hope I am not being too obscure, but I don't want to break confidences either - so I am trying to walk a pretty tight rope. If anyone can help, I'd greatly appreciate it.
Synchronizing two databases is very, very complicated if both databases need to be updatable. If one is a slave of the other, it's not nearly as difficult. I have more than once programmed just this kind of synchronization using Access, once with an MDB on a web server that had to be synched with a local data MDB (to incorporate data edited on the website; no edits went back to the website, so one-way synch, but still the need to merge edits on the non-web side), as well as once programming a synch between MySQL on a website and Access in a master (MySQL) / slave (Access) relationship.
On the website, you program a dump of data for each table to a text file. It's helpful to have timestamp fields in the MySQL tables so that you know when records were created and updated. This allows you to select which records to dump since the last data dump (which makes the synchronization of data on the Access side much simpler).
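As a very rough sketch, the dump on the MySQL side could be done with something like the following (the table, column and file names here are placeholders, and the date literal stands for the timestamp of the previous dump):

-- dump only rows created or updated since the last dump
SELECT id, field1, field2, created, updated
  INTO OUTFILE '/tmp/orders_dump.txt'
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
FROM orders
WHERE updated >= '2009-06-01 00:00:00';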
The way I programmed it was to then import the text files into staging tables that were indexed appropriately and linked to the front-end Access app. Once the data dumps are imported into the staging tables, you then have three tasks:
find the new records and append them to the Access data store. This is easily done with an outer join (a sketch of that query follows the discussion of updates below).
deal with the deletions. I'll discuss this later, as it's, well, complicated.
deal with updated records. For this, I wrote DAO code that would write column-by-column SQL statements.
Something like this:
UPDATE LocalTable INNER JOIN DownloadTable
  ON LocalTable.ID = DownloadTable.ID
SET LocalTable.Field1 = DownloadTable.Field1, LocalTable.Updated = DownloadTable.Updated
WHERE LocalTable.Field1 <> DownloadTable.Field1
Now, obviously, the WHERE clause has to be a bit more complicated than that (you have to deal with NULLs, and you have to use criteria formatted appropriately for the data types, i.e., with "" and ## for text and dates, respectively, and no delimiters for numeric data), but writing the code to do that is pretty easy.
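For instance, a NULL-aware version of the statement above for a single text field might come out of the generator looking roughly like this (ID as the join key is an assumption, and the "" and ## delimiters only come into play when literal values rather than joined columns are embedded in the criteria):

UPDATE LocalTable INNER JOIN DownloadTable
  ON LocalTable.ID = DownloadTable.ID
SET LocalTable.Field1 = DownloadTable.Field1,
    LocalTable.Updated = DownloadTable.Updated
WHERE (LocalTable.Field1 <> DownloadTable.Field1)
   OR (LocalTable.Field1 IS NULL AND DownloadTable.Field1 IS NOT NULL)
   OR (LocalTable.Field1 IS NOT NULL AND DownloadTable.Field1 IS NULL);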
Skeleton code looks something like this:
Dim db As DAO.Database
Dim rsFields As DAO.Recordset
Dim fld As DAO.Field
Dim strSQL As String

Set db = CurrentDb
' open a one-row recordset just to walk the updatable fields
Set rsFields = db.OpenRecordset("SELECT TOP 1 Field1, Field2, Field3 FROM LocalTable;")
For Each fld In rsFields.Fields
    ' [build the column-by-column UPDATE statement for fld.Name into strSQL
    '  and execute it, e.g. with db.Execute strSQL, dbFailOnError]
Next fld
Set fld = Nothing
rsFields.Close
Set rsFields = Nothing
Set db = Nothing
Now, as I said, the complicated part is writing the WHERE clause for each SQL statement, but that's pretty easy to figure out. Also, note that in your rsFields recordset (which is used only to walk through the fields you want to update) you want to include only the fields that are updatable, so you'd leave out the CREATED field and the PK field (and any other fields that you don't want to update).
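As for the first of the three tasks, appending the new records, the outer join query mentioned earlier might look roughly like this (again, ID as the key and the column names are placeholders):

INSERT INTO LocalTable (ID, Field1, Field2, Created, Updated)
SELECT d.ID, d.Field1, d.Field2, d.Created, d.Updated
FROM DownloadTable AS d LEFT JOIN LocalTable AS l ON d.ID = l.ID
WHERE l.ID IS NULL;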
Now, for the DELETES.
You might think it's a good idea to simply delete any record in the local table that's not in the remote table. That works fine if it really is a slave database, but so often what is originally a slave ends up getting its own edits. So, in that case, you need to not delete records from the master MySQL database, and instead have a DELETE flag that marks records deleted. You could have different varieties of logic that could clean the deleted records out of the master database (e.g., if you're using date stamps in the records, you could delete all records flagged DELETED with LastUpdated timestamp that is <= the last time you dumped the data; alternatively, you could have the Access app send a text file up to the server with a list of the records that have been successfully deleted from the Access data store). If there are edits in the Access data store, then you'll need some logic for dealing with an edit there on a record that was deleted from the MySQL "master" database.
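A sketch of the flag-and-purge idea, assuming a Deleted flag and a LastUpdated timestamp column on the MySQL side (both names are placeholders), run after each successful dump:

-- on the MySQL master: rows are never deleted outright, only flagged (Deleted = 1);
-- rows already picked up by the last dump can then be purged
DELETE FROM orders
WHERE Deleted = 1
  AND LastUpdated <= '2009-06-01 00:00:00';  -- timestamp of the last data dump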
In summary:
If you have a true master/slave relationship, it's fairly trivial. If you really wanted to do it by brute force, you'd just dump the entirety of all the MySQL data tables to text files, delete all the records in your Access data store and import the text files.
I would tend not to do that, as the first time you need to depart from the pure master/slave relationship, you're hosed and have to rewrite everything from scratch.
The outline I gave above will work very cleanly for master/slave, but will also work well if you have a few fields that are private to the "slave" database, or data that exists in the slave and not in the master (which is the situation I was working with).
Related
My current team of 20 people uses an access database front end on their desktop. The backend of the access database is on a network drive. I have been asked to create an access database front end with MYSQL as the back end.
I have installed MySQL Workbench and the ODBC connector on my computer. I have created the schema and tables, and I have connected the front end of the database to the MySQL tables I created in Workbench. My questions are:
How do I deploy this for the team to use? I believe I can move the front end of the Access DB to the network drive and the team can copy it to their desktops. But what do I do about the backend?
Does the team need to have the ODBC connector installed on their computers?
Should I move the MySQL workbench to the network drive?
PS: I am a new developer just learning about databases, and I am only familiar with Access databases, so please go easy on me.
Thanks.
SO is for questions about coding, so this is OT. However:
One method is described in my article:
Deploy and update a Microsoft Access application with one click.
As for the backend: the old backend you can archive. It will not be used anymore.
The ODBC connector: yes, the team needs it installed on their computers.
MySQL Workbench: probably not. It is only for you, should you need to modify the MySQL database.
Well, first up, you STILL need and want to deploy the application part called the front end (FE) to each workstation.
So, after you migrate the data to MySQL, you will of course use the Access linked table manager and re-link the tables to MySQL. How this works is really much the same as what you have now. The only difference is that your linked tables now point to the database server (MySQL). From the application's point of view, it should work as before.
Like all applications, be it Outlook, Excel, or accounting packages? You STILL deploy the application part to each workstation. So just because YOU are now developing and writing software with Access does not mean out of the blue that you now, for some strange reason, STOP deploying the FE part to each workstation. In fact, you should be deploying a compiled version of your application (an accDE).
A few more tips:
WHEN you link the FE, MAKE SURE you use a FILE dsn. The reason for this is that Access converts the links to DSN-less for you. What this means is that once you link the tables, you are free to deploy the FE to each workstation and it will retain the linked table information WITHOUT you having to set up a DSN connection on each workstation. You will also, of course, have to deploy the MySQL ODBC driver to each workstation, as that is not part of Access nor is it part of your application.
So just because you are now developing software does not suggest nor get you off the hook of deploying that application to each workstation. So, the setup you have now with a FE on each workstation does NOT change one bit.
For the most part, after you migrate the data to MySQL and then likely set up your relationships (say with MySQL Workbench), there are several other things you need to keep in mind.
All tables now need a primary key. You likely have this, but Access with an Access back end could and did work with tables without a PK. However, for SQL Server/MySQL etc., all tables need that PK.
Next up:
If you have any true/false columns, you MUST set up a default for that column. If such true/false columns allow nulls or lack a default value, this will confuse Access - so ensure that true/false columns can't have nulls and have a default (usually 0) set up on the server side.
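As a sketch, on the MySQL side that might look something like this (the table and column names are placeholders, and this assumes the Yes/No field is stored as a tinyint(1)):

ALTER TABLE tblHotels
  MODIFY COLUMN IsActive TINYINT(1) NOT NULL DEFAULT 0;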
Add a "rowversion" column. This is not to be confused with a datetime column. In SQL server this rowversion column is of timestamp data type (a poor name, since the column has zero to do with time - it is simply a column that "versions" the row. This will also eliminate many errors. I don't know what this type of column is called in MySQL, but all tables should have this column (there is zero need to see/use/look at this column on your forms - but it should be part of the table.
All of your forms, reports and code should work as before. For VBA recordset code, you need this:
Dim rst As DAO.Recordset
Dim strSQL As String

strSQL = "SELECT * FROM tblHotels"
Set rst = CurrentDb.OpenRecordset(strSQL)
You likely in the past had lots of code as per above.
You now need:
Set rst = CurrentDb.OpenRecordset(strSQL, dbOpenDynaset, dbSeeChanges)
You can generally do a search and replace (find each .OpenRecordset and add the dbOpenDynaset, dbSeeChanges to each line of code you find). I put the dbOpenDynaset, dbSeeChanges in my paste buffer and then do a system-wide search. It takes a few minutes at most to find all the .OpenRecordsets.
At this point, 99% of your code, forms and everything should work.
The ONE gotcha?
In Access, be it on a form or in VBA recordset code? When you create a new row, or start typing on/in a form? Access with an Access back end would generate the PK at that point. There was/is no need to save the record to get the PK value. This need is rare, and in a large application I only had about 2 or 3 places where this occurred.
So, if you have code like this:
Dim rstRecords As DAO.Recordset
Dim lngPK As Long ' get PK value of new record

Set rstRecords = CurrentDb.OpenRecordset("tblHotels")
rstRecords.AddNew
' code here sets/adds data/sets values
rstRecords!HotelName = "Mount top Hotel"
rstRecords!City = "Jasper"
' get PK of this new record
lngPK = rstRecords!ID ' get PK
rstRecords.Update
So, in the above code, I am grabbing the PK value. But with server systems, you can NOT get the PK value until AFTER the record has been saved. So, you have to change the above to:
rstRecords.Update
rstRecords.Bookmark = rstRecords.LastModified
' get PK of this new record
lngPK = rstRecords!ID ' get PK
Note how we grab/get the PK AFTER we save.
The bookmark above simply re-sets the record pointer to the new record. This is a "quirk" of DAO: when adding new records, a .Update command moves the record pointer, and thus you move it back with the above .LastModified. You ONLY need the .LastModified trick for NEW records. For existing records, you already have the PK - so it doesn't matter.
This type of code is quite rare, but sometimes in a form (and not VBA recordset code), some form code might need to get/grab the PK value. In a form, you can do this:
If Me.Dirty = True Then Me.Dirty = False ' save record
lngPK = me.ID ' get PK
So, once again, in the above form code, I make sure the record is saved first, and THEN I grab the PK value as above. Of course this is ONLY an issue for NEW records. (And you don't have to use the bookmark trick - that's only for recordsets, not bound forms.)
So, just keep in mind in the few cases when you need the PK value of a NEW record, you have to do this AFTER you saved the recordset (update) or in the case of a form, after you forced a form record save. This requirement is only for new records and ALSO only when your code needs to get/grab the new PK value.
Other than the above two issues? All the rest of your existing code and forms should work as before.
I have an Access Database which I populate using a pass through query.
I get specific data from a SQL Server database and dump it into Access.
This Access DB is updated weekly, but I cannot simply append the new week's data because data from past weeks also gets updated.
What I did is truncate the Access DB and then load everything again.
When I do that, I need to Compact and Repair the DB so that the size doesn't bloat.
My question is, is it ok to do that? Currently I am using the logic posted in this answer.
I have not encountered any problems yet, but I just want to make sure and get our Access gurus' thoughts about it. Also, I'm planning on setting up a scheduled run on our server to do the task.
Just need to make sure that it will not get corrupted easily (what is the chance of corrupting the file in the first place?).
In case you ask why I need to do this: the users of the data have no access to the SQL Server, so I have to pull the data for them so they can just connect to the Access DB instead.
Just in case you need the code:
Dim sqlDelete As String
Dim sqlAppend As String
sqlDelete = "DELETE * FROM dbo_Table;"
sqlAppend = "INSERT INTO dbo_Table (Col1,Col2) SELECT Col1,Col2 FROM passThrough;"
With DoCmd
.SetWarnings False
.RunSQL sqlDelete
.RunSQL sqlAppend
.SetWarnings True
End With
Application.SetOption "Auto Compact", True
If you need to truncate the data and load it again, I would recommend moving all tables that get truncated, along with any temporary tables, to a separate database.
After separating them, you can replace the database holding the disposable data with an empty database file when the application starts. Just keep a copy of the empty database and copy that file over the existing database with the old data. This should be done before opening any form/recordset based on tables from the temp database. It will work faster than a C&R each time and is more reliable, because sometimes a C&R may damage the file.
The template database file can also be stored in the main database in a Memo field.
I'll refer to this forum post: http://www.utteraccess.com/forum/index.php?showtopic=1561733
I'm looking for a way to implement SCM over multiple Access 07 databases. We just need source control for the forms/code.
Essentially we have 100+ databases that are structured the same, use the same code/modules, but each contains data for just one client. When we make a code change we currently have to manually go through each file to make the change.
Has anyone implemented source control for something similar (God help you if so) or have any ideas?
PS - I realize there is lots of DailyWTFery in here, this is a legacy product I've been assigned to do some emergency maintenance on before we rewrite to .NET/MSSQL, but I think there's enough work to warrant putting this in place if it's possible.
You can find out more about how to do SCM in this question; it does involve exporting and importing the code and forms using some undocumented (or poorly documented) commands. I also recall issues with checksums or version numbers arising when you used that method.
However, you could solve a lot of this problem by separating the data and the application sides of the DB into separate files and then adding table links to the data DB from the application DB. Then you would have just one application DB, and oodles of client data DBs.
Switching to a different client is as simple as re-linking to the other DB.
You can do that manually or code it with a structure something like this:
Dim myTable As TableDef
Dim db As Database
Dim strDBPath As String

strDBPath = ";DATABASE=C:\workbench\MyNewDB.mdb"
Set db = CurrentDb
db.TableDefs.Refresh
For Each myTable In db.TableDefs
    ' only touch linked tables; local tables have an empty Connect string
    If Len(myTable.Connect) > 0 Then
        myTable.Connect = strDBPath
        myTable.RefreshLink
    End If
Next
' Note: This whips through every table in your database and re-links it to the new
' DB. ODBC links will have a different value for strDBPath - I'll leave that as
' an exercise for the reader.
I am very new to this and a good friend is in a bind. I am at my wit's end. I have used GUIs like Navicat and SQLyog to do this, but only manually.
His band info data (schedules and whatnot) is in a MYSQL database on a server (admin server).
I am putting together a basic site for him written in Perl that grabs data from a database that resides on my server (public server) and displays schedule info, previous gig newsletters and some fan interaction.
He uses an administrative interface, which he likes and desires to keep, to manage the data on the admin server.
The admin server db has a bunch of tables and even table data the public db does not need.
So, I created tables on the public side that only contain relevant data.
I basically used a gui to export the data, then insert to the public side whenever he made updates to the admin db (copy and paste).
(FYI I am using DBI module to access the data in/via my public db perl script.)
I could access the admin server directly to grab only the data I need but, the whole purpose of this is to "mirror" the data not access the admin server on every query. Also, some tables are THOUSANDS of rows and parsing every row in a loop seemed too "bulky" to me. There is however a "time" column which could be utilized to compare to.
I cannot "sync" due to the fact that the structures are different, I only need the relevant table data from only three tables.
SO...... I desire to automate!
I read "copy" was a fast way but, my findings in how to implement were too advanced for my level.
I do not have the luxury of placing a script on the admin server to notify when there was an update.
1- I would like to set up a script to check a table to see if a row was updated or added on the admin servers db.
I would then desire to update or insert the new or changed data to the public servers db.
This "check" could be set up in a cron job I guess or triggered when a specific page loads on the public side. (the same sub routine called by the cron I would assume).
This data does not need to be "real time" but, if he updates something it would be nice to have it appear as quickly as possible.
I have done much reading, module research and experimenting but, here I am again at stackoverflow where I always get great advice and examples.
Much of the terminology is still quite over my head so verbose examples with explanations really help me learn quicker.
Thanks in advance.
The two terms you are looking for are either "replication" or "ETL".
First, replication approach.
Let's assume your admin server has tables T1, T2, T3 and your public server has tables TP1, TP2.
So, what you want to do (since you have different table structures, as you said) is:
Take the tables from public server, and create exact copies of those tables on the admin server (TP1 and TP2).
Create a trigger on the admin server's original tables to populate the data from T1/T2/T3 into admin server's copy of TP1/TP2.
You will also need to do initial data population from T1/T2/T3 into admin server's copy of TP1/TP2. Duh.
Set up the "replication" from admin server's TP1/TP2 to public server's TP1/TP2
A different approach is to write a program (such programs are called ETL - Extract-Transform-Load) which will extract the data from T1/T2/T3 on the admin server (the "E" part of "ETL"), massage the data into a format suitable for loading into the TP1/TP2 tables (the "T" part of "ETL"), transfer those files (via ftp/scp/whatnot) to the public server, and the second half of the program (the "L" part) will load the files into the tables TP1/TP2 on the public server. Both halves of the program would be launched by cron or your scheduler of choice.
There's an article with a very good example of how to start building Perl/MySQL ETL: http://oreilly.com/pub/a/databases/2007/04/12/building-a-data-warehouse-with-mysql-and-perl.html?page=2
If you prefer not to build your own, here's a list of open source ETL systems, never used any of them so no opinions on their usability/quality: http://www.manageability.org/blog/stuff/open-source-etl
I think you've misunderstood ETL as a problem domain, which is complicated, versus ETL as a one-off solution, which is often not much harder than writing a report. Unless I've totally misunderstood your problem, you don't need a general ETL solution, you need a one-off solution that works on a handful of tables and a few thousand rows. ETL and Schema mapping sound scarier than they are for a single job. (The generalization, scaling, change-management, and OLTP-to-OLAP support of ETL are where it gets especially difficult.) If you can use Perl to write a report out of a SQL database, you probably know enough to handle the ETL involved here.
1- I would like to set up a script to check a table to see if a row was updated or added on the admin servers db. I would then desire to update or insert the new or changed data to the public servers db.
If every table you need to pull from has an update timestamp column, then your cron job includes some SELECT statements with WHERE clauses based on the last time the cron job ran to get only the updates. Tables without an update timestamp will probably need a full dump.
I'd use a one-to-one table mapping unless normalization was required... just simpler, in my opinion. Why complicate it with "big" schema changes if you don't have to?
some tables are THOUSANDS of rows and parsing every row in a loop seemed too "bulky" to me.
Limit your queries to only the columns you need; if there are no BLOBs or exceptionally big columns among them, a few thousand rows should not be a problem via DBI with a fetchall method. Loop all you want locally, just make as few trips to the remote database as possible.
If a row has a newer date, update it. I will also have to check for new rows for insertion.
Each table needs one SELECT ... WHERE updated_timestamp_columnname > last_cron_run_timestamp. That result set will contain all rows with newer timestamps, which contains newly inserted rows (if the timestamp column behaves like I'd expect). For updating your local database, check out MySQL's ON DUPLICATE KEY UPDATE syntax... this will let you do it in one step.
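A rough sketch of the two statements involved, assuming the table has an updated_at timestamp and a unique key on gig_id (all names here are made up, and the ? placeholders would be bound from Perl/DBI):

-- on the admin server: pull only the rows changed since the last cron run
SELECT gig_id, gig_date, venue, updated_at
FROM gigs
WHERE updated_at > ?;

-- on the public server: insert new rows and update changed ones in one step
INSERT INTO public_gigs (gig_id, gig_date, venue, updated_at)
VALUES (?, ?, ?, ?)
ON DUPLICATE KEY UPDATE
  gig_date = VALUES(gig_date),
  venue = VALUES(venue),
  updated_at = VALUES(updated_at);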
... how to implement were too advanced for my level ...
Yes, I have actually done this already but, I have to manually update...
Some questions to help us understand your level... Are you hitting the database from the mysql client command-line or from a GUI? Have you gotten to the point where you've wrapped your SQL queries in Perl and DBI, yet?
If the two databases have different schemas, you'll need an ETL solution to map from one schema to the other.
If the schemas are the same, all you have to do is replicate the data from one to the other.
Why not just create a structure on the 'slave' server identical to the master server's? Then create a small table that keeps track of the last timestamp or id for the updated tables.
Then select from the master all rows changed since the last timestamp or greater than the id. Insert them into the matching table on the slave server.
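A minimal sketch of that tracking-table idea, with made-up table and column names:

-- one row per mirrored table, holding the high-water mark of the last sync
CREATE TABLE sync_state (
  table_name VARCHAR(64) PRIMARY KEY,
  last_synced DATETIME NOT NULL
);

-- pull only the rows changed since the last sync for one table
SELECT *
FROM gigs
WHERE updated_at > (SELECT last_synced
                    FROM sync_state
                    WHERE table_name = 'gigs');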
You will need to be careful of updated rows. If a row on the master is updated but the timestamp doesn't change then how will you tell which rows to fetch? If that's not an issue the process is quite simple.
If it is an issue then you need to be more sophisticated, but without knowing the data structure and update mechanism, it's a wild goose chase to give pointers on it.
The script could be called by cron every so often to update the changes.
If the database structures must be different on the two servers, then a simple translation step may need to be added, but most of the time that can be done within the SQL SELECT statement, with maybe a join or two.
I need to convert data that already exists in a MySQL database to a SQL Server database.
The caveat here is that the old database was poorly designed, but the new one is in proper 3NF form. Does anyone have any tips on how to go about doing this? I have SSMS 2005.
Can I use this to connect to the MySQL DB and create a DTS? Or do I need to use SSIS?
Do I need to script out the MySQL DB and alter every statement to "insert" into the SQL Server DB?
Has anyone gone through this before? Please HELP!!!
See this link. The idea is to add your MySQL database as a linked server in SQL Server via the MySQL ODBC driver. Then you can perform any operations you like on the MySQL database via SSMS, including copying data into SQL Server.
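For example, once the linked server is set up, something along these lines would work from SSMS (the linked server name, schema and table names here are invented):

-- copy a table's rows from the MySQL linked server into SQL Server
INSERT INTO dbo.Customers (CustomerName, City)
SELECT CustomerName, City
FROM OPENQUERY(MYSQL_LINK, 'SELECT CustomerName, City FROM olddb.customers');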
Congrats on moving up in the RDBMS world!
SSIS is designed to do this kind of thing. The first step is to map out manually where each piece of data will go in the new structure. So your old table had four fields; in your new structure, fields 1 and 2 go to table A and fields 3 and 4 go to table B, but you also need to have the autogenerated id from table A. Make notes as to where data types have changed and you may need to make adjustments, or where you have required fields where the data was not required before, etc.
What I usually do is create staging tables. Put the data in its denormalized form in one staging table, then move it to normalized staging tables and do the cleanup there, adding the new ids to the staging tables as soon as you have them. One thing you will need to do if you are moving from a denormalized database to a normalized one is eliminate the duplicates from the parent tables before inserting them into the actual production tables. You may also need to do data cleanup, as there may be required fields in the new structure that were not required in the old, or data conversion issues because of moving to better data types (for instance, if you stored dates in the old database in varchar fields but properly move to datetime in the new db, you may have some records which don't have valid dates).
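As a hypothetical example of that de-duplication step (the staging and production table names here are made up):

-- load each distinct customer from the denormalized staging table
-- into the production parent table, skipping ones already present
INSERT INTO dbo.Customer (CustomerName, Phone)
SELECT DISTINCT s.CustomerName, s.Phone
FROM dbo.STG_OldOrders AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Customer AS c
                  WHERE c.CustomerName = s.CustomerName);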
Another issue you need to think about is how you will convert from the old record ids to the new ones.
This is not an easy task, but it is doable if you take your time and work methodically. Now is not the time to try shortcuts.
What you need is an ETL (extract, transform, load) tool.
http://en.wikipedia.org/wiki/Extract,_transform,_load#Tools
I don't really know how far an ETL tool will get you, depending on the original and new database designs. In my career I've had to do more than a few data migrations, and we usually had to design a special utility which would update a fresh database with records from the old database - and yes, we coded it complete with all the update/insert statements that would transform the data.
I don't know how many tables your database has, but if there are not too many then you could consider going the grunt route. That's one technique that's guaranteed to work, after all.
If you go to your database in SSMS and right-click, under Tasks there should be an option for "Import Data". You can try to use that. It's basically just a wizard that creates an SSIS package for you, which it can then either run for you automatically or which you can save and then alter as needed.
The big issue is how you need to transform the data. This goes into a lot of specifics which you don't include (and which are probably too numerous for you to include here anyway).
I'm certain that SSIS can handle whatever transformations you need to do to change it from the old format to the new. An alternative, though, would be to just import the tables into MS SQL as-is into staging tables, then use SQL code to transform the data into the 3NF tables. It's all a matter of what you're most comfortable with. If you go the second route, then the import process that I mentioned above in SSMS could be used. It will even create the destination tables for you. Just be sure that you give them unique names, maybe prefixing them with "STG_" or something.
Davud mentioned linked servers. That's definitely another way that you can go (and got my upvote). Personally, I prefer to copy the tables over into MS SQL first since linked servers can sometimes have weirdness, especially when it comes to data types not mapping between different providers. Having the tables all in MS SQL will also probably be a bit faster and saves time if you have to rerun or correct portions of the data. As I said though, the linked server method would probably be fine too.
I have done this going the other direction and SSIS works fine, although I might have needed to use a script task to deal with slight data type weirdness. SSIS does ETL.