Data sync solution? - MySQL

For security reasons I'm in an environment where third-party apps can't access my DB. Because of this I need some service/tool/script (I don't know what yet... I'm open to the best option and still reading around...)
that lets me generate, on a regular basis (daily, weekly, monthly), a CSV file with all new/modified records for a certain application.
I should be able to automate this process and also export a new file at any time.
So it should keep track, for each application, of which records it still needs.
Each application will need the data in a different format (CSV/XLS/SQL), and some fields are needed by one application but not by another... It should be fairly flexible...
What is the best option for me? Creating custom tables for each application, and extracting the modified data based on those?

I think the best thing here, assuming you have access to the server to set this up, is to make a small command-line program that can do the relatively simple task you need. Languages like Perl are good for this sort of thing, I believe.
Once you have that 'tool' made, you can schedule it through the OS of the server to run every set amount of time: a Scheduled Task on a Windows server, or a cron job on a Linux server.
You can also (without having to set up the scheduled task, if you don't want to or can't) make this small command-line application callable via CGI, which is a way of letting applications on the server be executed on demand by a web user. If you do enable this, though, I suggest you add some sort of locking so that it can only be run every so often and can't be run five times at once.
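To make the "which records does each application still need" part concrete, here is a minimal MySQL sketch. It assumes the source table has a modified_at timestamp column, and all table, column, and application names are hypothetical: a small tracking table remembers the last export per application, and each run selects only newer rows into a CSV.

    -- Hypothetical tracking table: one row per consuming application.
    CREATE TABLE export_log (
        app_name    VARCHAR(64) PRIMARY KEY,
        last_export DATETIME NOT NULL
    );

    -- Export rows created/modified since the last run to a CSV file.
    -- Requires the FILE privilege; the path must be writable by mysqld.
    SELECT id, name, modified_at
      INTO OUTFILE '/tmp/app1_export.csv'
      FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      LINES TERMINATED BY '\n'
    FROM records
    WHERE modified_at > (SELECT last_export FROM export_log
                         WHERE app_name = 'app1');

    -- Move the high-water mark forward for this application.
    UPDATE export_log SET last_export = NOW() WHERE app_name = 'app1';

A cron entry such as '0 2 * * * mysql mydb < export_app1.sql' (or an equivalent Scheduled Task on Windows) would then automate the nightly run.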
EDIT
You might also want to just look into database replication or adding read-only users. This saves a whole lot of arsing around. Try to find a solution that does not split or duplicate data. You can set up users that can only access certain parts of the database in certain ways, such as SELECT-only access to the data.
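For the read-only route, the MySQL side is just a grant; a minimal sketch (the account, database, and table names are hypothetical):

    CREATE USER 'app1_reader'@'%' IDENTIFIED BY 'use-a-strong-password';
    -- SELECT only, and only on the tables this application needs:
    GRANT SELECT ON mydb.records TO 'app1_reader'@'%';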

Related

How to manage "releases" with MS Access

I have an MS Access 2016 application that a few people in one department use. I know this whole thing has web dev written all over it, but this Access database has been their process for a while and there is no time right now to switch over.
Recently, a different department asked to use this application, but with their own copy. Currently, if I need to make changes, I make them in a copy of the app; they send me their current version when I'm ready to import their data, I import it, and I send them back a new one. However, I currently copy the data table by table and paste it into the new database. This is inefficient and tedious, and now that there are two sets of data I'd be doing this for, that's crazy. There are over 20 tables, so I don't want to manually copy 40+ tables across the two apps for even the smallest change, like altering a message to the user.
I know I can copy the code instead so I can avoid importing the data, but for big changes I'll sometimes touch 15-20 VBA modules.
So, a couple questions:
1. Is there a way to generate INSERT statements for the entire database that I could run as a script? So when I create the new copy I just run one file and it populates all the data?
2. Are there any dev tools that will help this process? Right now I'm thinking it's just a downfall of creating an MS Access app, but there must be some way people have made the "new release" process easier. My current system seems flawed, and I'm looking for a more stable process.
EDIT:
Currently I have all my data stored locally, in the same Access file as the front end. Since I will have two different departments using the same functionality, how do I manage the data and the front end? The two departments should each have their own Access file for entering data through the forms, so one front end shared between the two departments won't work.
Also, should I create two separate back ends? Currently I would have nothing to distinguish what is being inserted/changed/deleted by one department versus the other. If I were to add a field specifying who entered the record, that would require a complete overhaul of all my queries, which I don't have time for as there are deadlines I need to meet.
First thing is to split the database. There is a wizard for this.
Then you can maintain the frontend without touching the real data.
Next, consider using a script to distribute revised versions of the frontend. I once wrote an article on one proven method to handle this:
Deploy and update a Microsoft Access application in a Citrix environment
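The article above covers a Citrix environment specifically, but the simplest form of such a distribution script is a launcher batch file on each user's machine that pulls the latest frontend from the share before opening it. A minimal sketch, with hypothetical paths:

    @echo off
    rem Copy the latest frontend from the server, then open the local copy.
    if not exist "%LOCALAPPDATA%\MyApp" mkdir "%LOCALAPPDATA%\MyApp"
    copy /Y "\\server\apps\MyApp\frontend.accdb" "%LOCALAPPDATA%\MyApp\frontend.accdb"
    start "" "%LOCALAPPDATA%\MyApp\frontend.accdb"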

real-time mysql insert query using c

Good day,
I am trying to learn how to save data into a MySQL database using C, in real time.
I am using a Raspberry Pi, and an external web server where the data will be saved. I am using C to get the data from the sensors and would like to save it to my external database, but I do not know how to proceed, as I am not that familiar with using C and MySQL together. My main concern is how to make sure my data is real-time: when my sensors produce data, it should be saved to the database right away.
I'm thinking of writing an infinite loop inside main, with an if statement serving as a trigger whenever there is data from the sensors, which then saves it to the MySQL server.
But I am not sure that is the most efficient way of doing this, so if you have any better ideas on how to capture my data in real time using C and save it to MySQL, it would be greatly appreciated.
In PHP I would simply have made a cron job for this, but since I will be doing this in C, I am at a loss on how to proceed, or whether my idea is correct.
You are looking at two independent problems:
Retrieve the data at a fixed interval
Save the data to a database.
For the first, there are two common methods. The first is polling, which means sitting in a loop and constantly checking whether an update is available. The second is using interrupts. Choose whichever is most appropriate for your problem, but to begin with you can use polling, and once the program works, move it to interrupts.
For the second, just install MySQL and the MySQL C connector: go to their site, then download and install it. The connection code is pretty simple, and there are a lot of examples online covering both the setup and the syntax.
An efficient way to do this kind of thing is a hardware interrupt. You should read the docs to check whether your hardware supports it.
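As a minimal sketch of the polling approach combined with the C connector (the connection details and the sensor read are placeholders, and error handling is kept to a minimum):

    /* Build with: gcc sensor.c $(mysql_config --cflags --libs) */
    #include <mysql.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical sensor read: replace with real GPIO/ADC code.
       Returns a reading, or -1 when no new data is available. */
    static int read_sensor(void) { return 42; }

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);
        if (!mysql_real_connect(conn, "server-host", "user", "password",
                                "sensordb", 0, NULL, 0)) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            return 1;
        }

        for (;;) {                       /* poll forever */
            int value = read_sensor();
            if (value >= 0) {            /* trigger: new data arrived */
                char sql[128];
                snprintf(sql, sizeof sql,
                         "INSERT INTO readings (value) VALUES (%d)", value);
                if (mysql_query(conn, sql) != 0)
                    fprintf(stderr, "insert failed: %s\n", mysql_error(conn));
            }
            usleep(100 * 1000);          /* wait 100 ms between polls */
        }
    }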

SQL Server 2012 Data Integration

I'm writing an intranet application (in a LAMP environment) that uses data from sections of an MSSQL 2012 database (used by another much larger application).
As I see it my options are to:
Directly query the database from the application.
Create a web service
Use Microsoft SQL Server Integration Services (SSIS) to have the data automatically integrated into my application's database
I suspect the best solution here would be SSIS; however, I've not done this before and I'm on a deadline, so if that's the case, could someone let me know:
a) With my limited experience in that area would I be able to set that up, and
b) What are the pros and cons of the above options?
Any other suggestions outside of the options I've thought of would also be appreciated.
Options:
Directly query the database from the application.
Upside:
Never any stale data
Downside:
Your application now contains application-specific code and is tied to that application
If you are in the common situation where the business buys another application containing the same master data, you now need special code to connect to two applications
Vendor might not like it
Might be performance impacts on source application
Use Windows Task Scheduler / SQL Agent to run a script or SSIS package to replicate data at x-minute intervals or so.
Upside:
Your application is only tied to your local copy of the database, which you can customise as required. If your source app gets moved to the cloud or something then you don't need to make application changes, just integration changes
If another source application appears with the same type of master data, you can now replicate that into your local DB rather than making application changes to connect to 2 databases.
Downside:
Possibility of stale data
Even worse: possibility of stale data without users realising it, with subsequent loss of confidence in the application
Another component to maintain
Whether you write a batch script, a .NET app, or an SSIS package, they are all pieces of logic that need to be scheduled to run
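As one hedged illustration of that scheduled copy (assuming the local copy also lives in SQL Server; the linked server, table, and the ModifiedDate column are all hypothetical), the x-minute job can be a single T-SQL MERGE run by SQL Agent:

    -- Incrementally copy new and changed rows into the local table.
    MERGE INTO LocalDb.dbo.Customers AS tgt
    USING (SELECT CustomerID, Name, ModifiedDate
           FROM SourceSrv.SourceDb.dbo.Customers) AS src
        ON tgt.CustomerID = src.CustomerID
    WHEN MATCHED AND src.ModifiedDate > tgt.ModifiedDate THEN
        UPDATE SET tgt.Name = src.Name,
                   tgt.ModifiedDate = src.ModifiedDate
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (CustomerID, Name, ModifiedDate)
        VALUES (src.CustomerID, src.Name, src.ModifiedDate);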
Another option is database replication: if your source database is Oracle or SQL Server, you can use differential replication to copy it into another database.
You need to consider where you will be in a few years. The data copy method probably gives you more flexibility to adapt to changes in the source system as you only need to change your integration, not your whole app if something drastic changes with your source system.
You also need to consider: will you ever be asked to propagate changes back the other way, i.e. update data in your local copy and have it pushed back to the source systems?

Developing with a Split Frontend/Backend MS Access Database from the start

I am starting to create an MS Access database. I have no Access experience; my previous experience is with MySQL and Oracle. Initially I had some difficulty coming to terms with the fact that MS Access usually stores both the front-end application and the Jet engine database in the same file. It's different from what I'm used to. Plus the database will be shared over a network, and it just makes more sense to split the application from the data.
After some reading, I see that it is possible to store the data in one file and then link to it from the application elements in another file. Every article I've come across deals with splitting the database into two parts after the database has already been made, and never discusses creating a split database application from the start. Is that because it would be a bad idea? I can't really imagine why, except that I've noticed Access does not let me keep two database files open at the same time (it automatically closes one). So I foresee a need to constantly open and re-open files if I go down that route.
There is one practical reason why you might want to start with a single database. If you start with a front and back end file, you'll have to create tables in one database, then set up the link for each table manually.
This is not a big deal, but if you're just starting the system, you can save some busywork by developing the pilot system in one file, then splitting it. My assumption is you'll probably be making a lot of changes to the data structure at the outset; your work will go smoother if you're working in one file.
It is definitely a good idea to split the database before you deploy it to production. I'm not sure why you're having problems opening 2 Access files at once; this is not a restriction of Access.
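For what it's worth, those table links don't have to be set up by hand through the UI either. A minimal DAO sketch that creates one link from the front end (the table name and back-end path are hypothetical):

    ' Link one back-end table into the front-end database.
    Sub LinkBackEndTable()
        Dim db As DAO.Database
        Dim tdf As DAO.TableDef
        Set db = CurrentDb
        Set tdf = db.CreateTableDef("tblCustomers")
        tdf.Connect = ";DATABASE=\\server\share\backend.mdb"
        tdf.SourceTableName = "tblCustomers"
        db.TableDefs.Append tdf
    End Sub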
You can create the two db files separately at the outset. I do that often. I seldom need both open at the same time in the Access interface. I only open the back-end database, which houses the tables, indexes, and relationships, to modify the design of those db objects. And those types of changes are relatively infrequent; most of the development workload is for the front-end db. To modify data in the tables, you can use the table links from the front-end db.
It is not a bad idea. You can have two files open at the same time: either open another Access instance, or launch one by double-clicking the second file. Make sure you have created a suitable back-end design before you start on the front end.
It is more efficient to have it all in one file while you're the only one working on it. Once the database design is finalised, then you can split the db.
Splitting the db is useful during testing as well: it allows you to reset your data to a known state in about 5 seconds, just by copying a saved version of the back-end.

How to set up a development environment in MS Access

I have created an MS Access 2003 application, set up as a split front-end/back-end configuration, with a user group of about five people. The front end .mdb sits on a network file server, and it contains all the queries, forms, reports, and VBA code, plus links to all the tables in the back end .mdb and some links to ODBC data sources like an AS/400. The back end sits on the same network file server, and it just has the table data in it.
This was working well until I "went live" and my handful of users started coming up with enhancement requests, bug reports, etc. I have been rolling out new code by developing/testing in my own copy of the front-end .mdb in another network folder (which is linked to the same back-end .mdb), then posting my completed file in a "come-and-get-it" folder, alerting the users, and they go copy/paste the new front-end file to their own folders on the network. This way, each user can update their front end when they're at a 'stopping point' without having to boot everyone out at once.
I've found that when I'm developing now, sometimes Access becomes extremely slow. Like, when I am developing a form and attempt to click a drop-down on the properties box, the drop-down arrow will push in, but it will take a few seconds before the list of options appears. Or there's tons of lag in selecting & moving controls on a form. Or lots of keyboard lag.
Then, at other times, there's no lag at all.
I'm wondering if it's because I'm linked to the same back end as the other users. I did make a reasonable effort to set up the queries, forms, reports etc. with minimal record locking, if any at all, depending on the need. But I may have missed something, or perhaps there is some other performance issue I need to address.
But I'm wondering if there is an even better way for me to set up my own development back-end .mdb, so I can be testing my code on "safe" data instead of the same live data as the rest of the users. I'm afraid that it's only a matter of time before I corrupt some data, probably at the worst possible moment.
Obviously, I could just set up a separate back-end .mdb and manually reconfigure the table links in the front end every time, using the Linked Table Manager. But I'm hoping there is a more elegant solution than that.
And I'm wondering if there are any other performance issues I should be considering in this multi-user, split database configuration.
EDIT: I should have added that I'm stuck with MS Access (not MS-SQL or any other "real" back end); for more details see my comment to this post.
If all your users are sharing the front end, that's THE WRONG CONFIGURATION.
Each user should have an individual copy of the front end. Sharing a front end is guaranteed to lead to frequent corruption of the shared front end, as well as odd corruptions of forms and modules in the front end.
It's not clear to me how you could be developing in the same copy of the front end that the end users are using, since starting with A2000, that is prohibited (because of the "monolithic save model," where the entire VBA project is stored in a single BLOB field in a single record in one of the system tables).
I really don't think the problems are caused by using the production data (though it's likely not a good idea to develop against production data, as others have said). I think they are caused by poor coding practices and a lack of maintenance of your front-end code.
turn off COMPILE ON DEMAND in the VBE options.
make sure you require OPTION EXPLICIT.
compile your code frequently, after every few lines of code -- to make this easy, add the COMPILE button to your VBE toolbar (while I'm at it, I also add the CALL STACK button).
periodically make a backup of your front end, then decompile and recompile the code. This is accomplished by launching Access with the /decompile switch, opening your front end, closing Access, re-opening your front end (with the SHIFT key held down to bypass the startup code), compacting the decompiled front end (SHIFT key still held down), then compiling the whole project and compacting one last time. You should do this before any major code release.
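For reference, the decompile launch is just a command line; the paths below are examples only (adjust for your Office version and file):

    "C:\Program Files\Microsoft Office\OFFICE11\MSACCESS.EXE" /decompile "C:\Dev\MyFrontEnd.mdb"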
A few other thoughts:
you don't say whether it's a Windows server. Linux servers accessed over Samba have exhibited problems in the past (though some people swear by them and say they're vastly faster than Windows servers), and historically Novell servers have needed their settings tweaked to let Jet files be edited reliably. There are also settings (like OPLOCKS) that can be adjusted on a Windows server to make things work better.
store your Jet MDBs in shares with short paths. \\Server\Data\MyProject\MyReallyLongFolderName\Access\Databases\ is going to be much slower to read data from than \\Server\Databases\. This really makes a huge difference.
linked tables store metadata that can become outdated. There are two easy steps to fix it, and one drastic one. The easy ones: compact the back end, then compact the front end. If that doesn't help, the drastic one: completely delete the links and recreate them from scratch.
you might also consider distributing an MDE to your end users instead of an MDB, as its code cannot become decompiled (which an MDB's can).
see Tony Toews's Performance FAQ for other generalized performance information.
1) Relink Access tables from code: http://www.mvps.org/access/tables/tbl0009.htm
Once I'm ready to publish a new MDE to the users, I relink the tables, make the MDE, and copy the MDE to the server.
2) I specifically created the free Auto FE Updater utility so that I could change the FE MDE as often as I wanted and be quite confident that the next time someone ran the app it would pull in the latest version. For more info, see the free Auto FE Updater utility at http://www.granite.ab.ca/access/autofe.htm on my website; it keeps the FE on each PC up to date.
3) When working on site at a client's, I make updates to the table structure after hours, when everyone is out of the system. See "HOW TO: Detect User Idle Time or Inactivity in Access 2000" (Q210297) at http://support.microsoft.com/?kbid=210297 and "ACC: How to Detect User Idle Time or Inactivity" (Q128814) at http://support.microsoft.com/?kbid=128814.
However, we found that the code which runs on the timer event must be disabled for the programmers; otherwise weird things start happening when you're editing code.
Also, print preview would sometimes not let users run a menu item to export the report to Excel or elsewhere, so they had to right-click on the previewed report to get some sort of internal focus back on the report before they could export it. This was also helped by extending the timer to five minutes.
The downside of extending the timer to five minutes: if a person stayed in the same form at the same control for considerable parts of the day, i.e. someone doing the same inquiries over and over, the routine didn't realize they had actually done something. I'll be putting in some logic sometime to reset this timer whenever they do anything in the program.
4) In reference to another person's comment about scripts and such to update the schema, see Compare'Em at http://home.gci.net/~mike-noel/CompareEM-LITE/CompareEM.htm. While it has its quirks, it does create the VBA code to update tables, fields, indexes, and relationships.
Use VBA to unlink and re-link your tables to the new target when switching from dev to prod. It's been too many years for me to remember the syntax; I just know the function was simple to write.
Or use MS Access to talk to MS Access through ODBC, or some other data connection that lives outside the client MDB.
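The syntax is indeed short. A minimal DAO sketch of the relink (the path is hypothetical, and it deliberately skips ODBC links, such as the AS/400 ones, which carry their own connect strings):

    ' Point every Jet-linked table at a new back-end file, then refresh.
    Sub RelinkTables(ByVal backEndPath As String)
        Dim db As DAO.Database
        Dim tdf As DAO.TableDef
        Set db = CurrentDb
        For Each tdf In db.TableDefs
            If Left$(tdf.Connect, 10) = ";DATABASE=" Then  ' Jet links only
                tdf.Connect = ";DATABASE=" & backEndPath
                tdf.RefreshLink
            End If
        Next tdf
    End Sub

    ' Usage: RelinkTables "\\server\share\backend_dev.mdb"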
As with all file-based databases, you will eventually run into problems at peak usage, or when you go over a small magical number of users somewhere between 2 and 30.
Also, Access tends to corrupt frequently, so backups, compaction, and repair need to be done on a frequent basis. Third-party tools used to exist to automate this task.
As far as performance goes, the data is processed client-side, so you might want to use something like NetMeter to watch how much data is going over the wire. The same principles about indexing and avoiding table scans apply to file-based DBs as well.
Many good suggestions from other people. Here's my 2 millicents' worth. My back-end data is on a server, accessed through a drive mapping: in my case, the Y: drive. Production users get the mapping through a login script using Active Directory. The following scenarios are then easily handled by batch file (a minimal sketch follows the list):
Develop against the local computer by running a subst command in a batch file
run reports against last night's data by pointing Y: to the backup server (read-only)
run reports against end-of-month data by pointing to the right directory
test against specialized scenarios by keeping a special directory
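A minimal sketch of the dev-machine batch file (all paths are hypothetical):

    rem Drop the production mapping, then point Y: at local test data.
    net use Y: /delete
    subst Y: C:\Dev\TestData

    rem ...develop and test...

    rem Restore the production share when done.
    subst Y: /d
    net use Y: \\server\proddata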
In my environment (an average of 5 simultaneous users, thousands of rows, not tens of thousands), corruption has occurred, but it's rare and manageable. Only once in the last several years have we resorted to the previous day's backup. We use SQL Server for our higher-volume stuff, but it's not as convenient to develop against, probably because we don't have a SQL admin on site.
You might also find some of the answers to this question (how to extract schemas from Access) useful as well. Once you've extracted a schema using one of the techniques suggested there, you gain a whole range of new options, like the ability to keep the schemas under source control, as well as being able to easily build "clean" testing environments.
Edit to respond to comment:
There's no easy way to source-control an Access database in its native format, but schema files are just text files like any other. Hence, you can check them in and out of the source control software of your choice for easy version control and rollbacks.
Of course, this relies on having a series of scripts set up to rebuild your database from the schema. Once you do, it's normally fairly trivial to create an alternative version that rebuilds it in a different location, allowing you to build test environments from any previously committed version of the schema. I hope that clarifies a bit!
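As one hedged example of what such an extraction script can look like: Access's undocumented but widely used SaveAsText method can dump objects to plain text files for source control (the output path below is hypothetical):

    ' Export all forms and modules as text files for version control.
    Sub ExportObjectsToText()
        Dim obj As AccessObject
        For Each obj In CurrentProject.AllForms
            Application.SaveAsText acForm, obj.Name, "C:\Dev\src\" & obj.Name & ".frm.txt"
        Next obj
        For Each obj In CurrentProject.AllModules
            Application.SaveAsText acModule, obj.Name, "C:\Dev\src\" & obj.Name & ".bas.txt"
        Next obj
    End Sub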
If you want to update the back-end MDB schema automatically when you release a new FE to the clients, then Compare'Em (http://home.gci.net/~mike-noel/CompareEM-LITE/CompareEM.htm) will happily generate the VBA code needed to recreate an MDB, or the code to create the differences between two MDBs so you can do a version upgrade of the already-existing BE MDB. It's a bit quirky, but it works.
I use it all the time.
You need to understand that a shared MDB file for the data is not a robust solution. Microsoft would suggest that SQL Server or some other server-based database would be a far better solution and would allow you to keep the same Access front end. The migration wizard can help you make the changeover if you want to go that way.
As another user pointed out, corruption will occur. It is simply a question of how often, not if.
To understand the performance issues, you need to understand that to the server, the MDB file with the data in it is simply that: a file. Since no code runs on the server, the server does not understand transactions, record locking, etc. It simply knows that there is a file that a bunch of people are trying to read and write simultaneously.
With a database system such as SQL Server, Oracle, DB2, or MySQL, the database program runs on the server and looks to the server like a single program accessing the database file. It is the database program (running on the server) that handles record locking, transactions, concurrency, logging, data backup/recovery, and all the other nice things one wants from a database.
Since a database program designed to run on the server is built to do that and only that, it can do it far better and more efficiently than a program like Access reading and writing a shared file (MDB).
There are two rules for developing against live data:
The first rule is . . . never develop against live data. Not ever.
The second rule is . . . never develop against live data. Not ever.
You can programmatically change the bindings of linked tables, so you can write a macro to change your links when you're deploying a new version.
The application is slow because it's MS Access, and Access doesn't like many concurrent users (where "many" is any number greater than 1).