Access: How to get the most recent datetime of table data changes

I have inherited a legacy Access app that has a TON of code running before the main form comes up. This code imports data, deletes data, and changes data in a million ways.
Is there a way, after the startup stuff is finished, to list the tables and see when each of them last had data affected?
Thanks!

Sorry, but I'm afraid the answer to your question is simply: No.
Here's an idea though:
Make a backup of the database file.
Open the app so it runs the code you are concerned about.
Compare the DB to the backup using a tool like Red Gate SQL Data Compare.
BTW: I'm not sure if the Red Gate tool works against Access databases, but FMS Inc. has one that claims to.
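If a full data compare turns out to be overkill, a cruder option is to snapshot each table's row count before and after the startup code runs; that catches inserts and deletes, though not in-place updates, so a real compare tool is still more thorough. A minimal Python sketch using pyodbc (the driver name and file path are assumptions to adapt; run it with the app closed so Jet's file locks don't interfere):

    import pyodbc

    # Connection string for the Access/Jet ODBC driver -- the driver name and
    # file path below are assumptions; adjust for your environment.
    CONN_STR = (
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
        r"DBQ=C:\path\to\legacy.mdb;"
    )

    def table_row_counts(conn_str):
        """Return {table_name: row_count} for all user tables in the database."""
        counts = {}
        with pyodbc.connect(conn_str) as conn:
            cur = conn.cursor()
            # cursor.tables() lists tables via ODBC metadata; 'TABLE' filters
            # out queries, and the MSys check skips Jet system tables.
            for row in cur.tables(tableType="TABLE"):
                name = row.table_name
                if name.startswith("MSys"):
                    continue
                cur.execute(f"SELECT COUNT(*) FROM [{name}]")
                counts[name] = cur.fetchone()[0]
        return counts

    # Usage: snapshot, run the app's startup code, snapshot again, diff.
    before = table_row_counts(CONN_STR)
    input("Run the app's startup code, then press Enter...")
    after = table_row_counts(CONN_STR)
    for name in sorted(set(before) | set(after)):
        if before.get(name) != after.get(name):
            print(f"{name}: {before.get(name)} -> {after.get(name)} rows")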

Related

Moving data from a local MSSQL server to a remote MySQL DB

I am using SSIS to move data between a local MSSQL server table and a remote MySQL table (data flow, OLE DB source and ODBC destination). This works fine if I'm only moving 2 rows of data, but it is very slow with the table I actually want, which has 5000 rows and fits into a CSV of about 3 MB: that currently takes about 3 minutes using SSIS's options, whereas performing the steps below can be done in 5 seconds at most.
I can export the data to a CSV file, copy it to the remote server, then run a script to import it straight into the DB, but this requires a lot more steps than I would like, as I have multiple tables I wish to perform the steps on.
I have tried row-by-row and batch processing, but both are very slow in comparison.
I know I can use the above steps, but I like using the SSIS GUI and would have thought there was a better way of tackling this.
I have googled away multiple times but have not found anything that fits the bill, so I am calling on external opinions.
I understand SSIS has its limitations, but I would hope there is a better and faster way of achieving what I am trying to do. If SSIS is so bad I may as well just rewrite everything into a script and be done with it, but I like the look and feel of the GUI and would like to move my data in this nice, friendly way of seeing things happen.
Any suggestions or opinions would be appreciated.
Thank you for your time.
As noted above, I have tried the SSIS options, including a third-party one, CozyRoc, but it sent some data with errors now and again (the delimiting on columns seemed off), copied a different number of rows on different runs, and caused enough problems to make me not trust the data.
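For reference, the scripted fallback I described looks roughly like this in Python (server names, credentials, paths, and the table name are placeholders; pymysql is just one MySQL driver that supports LOAD DATA LOCAL INFILE, which is the part that makes the bulk load fast):

    import csv
    import pyodbc
    import pymysql  # assumed available; the server must also allow local_infile

    TABLE = "my_table"          # placeholder table name
    CSV_PATH = "/tmp/my_table.csv"

    # 1) Export from MSSQL to a local CSV file.
    mssql = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "DATABASE=SourceDb;Trusted_Connection=yes;"
    )
    cur = mssql.cursor()
    cur.execute(f"SELECT * FROM {TABLE}")
    with open(CSV_PATH, "w", newline="") as f:
        writer = csv.writer(f)
        for row in cur:
            # NULL handling, dates, and encodings are glossed over here.
            writer.writerow(row)

    # 2) Bulk-load into MySQL. LOAD DATA LOCAL INFILE streams the file in one
    #    operation instead of thousands of individual INSERTs.
    mysql = pymysql.connect(
        host="remote.example.com", user="loader", password="secret",
        database="TargetDb", local_infile=True,
    )
    with mysql.cursor() as mcur:
        mcur.execute(
            f"LOAD DATA LOCAL INFILE '{CSV_PATH}' INTO TABLE {TABLE} "
            "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' "
            "LINES TERMINATED BY '\\n'"
        )
    mysql.commit()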

SELECT query is pulling data, source table appears in MSysObjects, Hidden/System are both enabled, but table isn't listed in object browser

I am completely perplexed.
A colleague has a database issue. I noticed that the (internal) software that created the problem local database file uses programmatic access to MS Jet, which meant an easy first step was to see if MS Access (2010) was happy with the database, and then fix, export/import, or repair it.
I copied the stand-alone local Jet data file to a non-networked virtual machine (so no chance of external data), and MS Access opened the db file easily, but I can't make sense of what I'm seeing.
MS Access is configured on that system to show all hidden and system objects, confirmed since the Access system tables in the file are all visible and can be opened. These are my observations:
The object browser lists the usual MS system tables, and a bunch of SELECT queries (which look correct) of the form SELECT (FIELD LIST) FROM (OTHERTABLENAME) WHERE (FIELDNAME=VALUE), nothing more.
The select queries show the usual grid with valid data records when opened, and the data looks correct as well.
No data tables with the given names are showing in the object browser interface.
The given names are listed as objects of the database, in the system table MSysObjects.
So... the underlying data tables ARE named in MSysObjects and seem to be true data tables, but they are NOT listed in the object browser, and I can't figure out how to open their datasheets (even though MS Access's own system tables are listed, and "Show hidden/system" are both enabled). The tables surely do exist in the file, since an apparent SELECT query is pulling their data from them, and the file is on a clean, non-networked machine with no other sources reachable.
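For reference, the check against MSysObjects was a plain read-only SELECT; sketched here in Python via pyodbc (the path is a placeholder, the Flags column is undocumented and version-dependent, and Jet may deny ODBC reads on MSysObjects depending on permissions; inside Access the same SELECT runs as an ordinary query):

    import pyodbc

    conn = pyodbc.connect(
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
        r"DBQ=C:\path\to\suspect.mdb;"  # placeholder path
    )
    cur = conn.cursor()
    # Type 1 = local table. Flags is undocumented, but comparing the Flags of
    # the invisible tables against those of normal visible tables shows which
    # bits the creating application set.
    cur.execute("SELECT Name, Type, Flags FROM MSysObjects WHERE Type = 1")
    for name, obj_type, flags in cur.fetchall():
        print(f"{name}\ttype={obj_type}\tflags={flags:#x}")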
Any ideas? I want to check the underlying data, but... what's going on?
When I examined your database, I discovered that the reason you can't access the tables normally is that the authors of the internal application which created the db file implemented measures to prevent normal access.
I advise you to contact them and your managers to get authorization and assistance to view the data.
Also, please be cautious with this question. A suspicious person might uncharitably interpret your question as a disguised request for hacking help. Please note I am not accusing you of anything underhanded ... simply asking you to notice how your question might be perceived. And, if that were to happen, I don't know what the consequences would be on Stack Overflow, but I can't imagine it would be good. So please be careful.

Migration strategies for SQL 2000 to SQL 2008

I've perused the threads here on migration from SQL 2000 to SQL 2008 but haven't really run into my question, so here we go with another one.
I'm building a strategy to move specific SQL 2000 databases to a new SQL 2008 R2 instance. My question concerns the best method for transferring the schema and data. One way I know of is the quick 'n' dirty detach-copy-attach method, which should work so long as I've done my homework with regard to compatibility, code, and such.
What if, though, I wrote the schema and logins via script and then copied the data via SSIS? I'm thinking of trying that so I can more easily integrate some of my test cases into the package (error handling and whatnot). What would I be setting myself up for if I did this?
Since you are moving the data between servers or instances, I would recommend moving it via data flows. If you don't expect to run the code more than once, you can let the wizard generate the code for the move. However, when I did this 2+ years ago, the wizard-generated code combined many "create table" commands into a few Execute SQL tasks and created a few data flow tasks with multiple sources and destinations in each to insert the data at the destination. This was good for getting up and running, but it was inadequate when I wanted to refresh the tables one more time after I had modified the schema of the new target tables. If you expect to run the refresh more than once, you may want to take the time to create the target schema first and then build the data flows manually.
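To illustrate the schema-first idea, here is a rough per-table refresh sketched in Python with pyodbc (server, database, and table names are placeholders; this is an illustration of the approach, not the wizard's output):

    import pyodbc

    SRC = pyodbc.connect("DRIVER={SQL Server};SERVER=sql2000host;"
                         "DATABASE=OldDb;Trusted_Connection=yes;")
    DST = pyodbc.connect("DRIVER={SQL Server};SERVER=sql2008host;"
                         "DATABASE=NewDb;Trusted_Connection=yes;")

    def refresh_table(name):
        """Empty the pre-created target table and reload it from the source."""
        src_cur = SRC.cursor()
        dst_cur = DST.cursor()
        # TRUNCATE assumes no foreign keys point at the table; use DELETE
        # otherwise. Tables with identity columns also need IDENTITY_INSERT
        # handling, which is omitted here.
        dst_cur.execute(f"TRUNCATE TABLE {name}")
        src_cur.execute(f"SELECT * FROM {name}")
        placeholders = ", ".join("?" * len(src_cur.description))
        dst_cur.fast_executemany = True  # batches inserts; needs recent pyodbc
        while True:
            rows = src_cur.fetchmany(5000)
            if not rows:
                break
            dst_cur.executemany(
                f"INSERT INTO {name} VALUES ({placeholders})", rows
            )
        DST.commit()

    for table in ["dbo.Customers", "dbo.Orders"]:  # your table list here
        refresh_table(table)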
Once you have moved the data, then you can enable full-text search on the new server. I don't believe you will need to have this enabled on your first load.
One reason I recommend against the detach-attach method for migration is that it brings all the dirty laundry from the 2000 database over to the 2008 R2 database. If security on the 2000 server was too lax, or there are many ancient users that shouldn't exist, it can be easier to clean this up by starting from scratch. With the detach-attach method, you have to deal with those users afterwards.

MySQL database sync for workstation developement/testing

I need a local copy of our production database, and I need to refresh it every few days so testing and development aren't working with terribly stale data. A few days old is just fine. Here is the pseudo plan:
Write a script on the production server that mysqldumps and gzips the database.
Add a cron process to run the script every other day during non-peak hours.
Write a script on the workstation that rsyncs that gzipped dump and loads it up.
Is there any better, cleaner, or safer way of doing this?
EDIT: Just to add clarity. We still have in place test data that is known, along with our test library (test-driven development). Once THOSE tests pass, it's on to the (more) real stuff.
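Roughly, the workstation side of the plan I have in mind looks like this in Python (hostnames, paths, and credentials are placeholders; the production side is just the cron'd mysqldump piped through gzip):

    import subprocess

    REMOTE_DUMP = "produser@prod.example.com:/var/backups/app_db.sql.gz"
    LOCAL_DUMP = "/tmp/app_db.sql.gz"

    # 1) Pull the latest gzipped dump from production (created there by cron:
    #    mysqldump --single-transaction app_db | gzip > app_db.sql.gz).
    subprocess.run(["rsync", "-avz", REMOTE_DUMP, LOCAL_DUMP], check=True)

    # 2) Load it into the local dev database by piping gunzip output to mysql.
    gunzip = subprocess.Popen(["gunzip", "-c", LOCAL_DUMP],
                              stdout=subprocess.PIPE)
    subprocess.run(
        ["mysql", "-u", "dev", "-pdevpass", "app_db_dev"],
        stdin=gunzip.stdout,
        check=True,
    )
    gunzip.stdout.close()
    gunzip.wait()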
You may wish to consider MySQL replication. It isn't a thing to be trifled with but may be what you are looking for. More information here... http://dev.mysql.com/doc/refman/5.0/en/replication-features.html (I don't personally know anything about it other than that it can be done).
Testing should be working with "known" data, not production data. You should have scripts to load "test" data into the system to achieve this. Test/dev shouldn't have to deal with a moving target of data. Besides, if you have any sensitive data in production (doesn't everyone?), your dev/test teams shouldn't have access to it.
Some suggestions for creating test data:
1) Excel spreadsheets with VBA behind them that generate SQL to run against the DB
2) Raw SQL scripts
3) Data creation programs that generate data in a known pattern.
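A tiny sketch of option 3 in Python: with a fixed random seed, the "known pattern" is reproducible, so tests can assert exact values (the table and column names are made up for illustration):

    import random

    random.seed(42)  # fixed seed: every run generates identical data

    def test_customer_rows(n):
        """Yield INSERT statements for n deterministic test customers."""
        for i in range(1, n + 1):
            name = f"TestCustomer{i:04d}"
            balance = round(random.uniform(0, 1000), 2)
            yield (
                f"INSERT INTO customers (id, name, balance) "
                f"VALUES ({i}, '{name}', {balance});"
            )

    with open("load_test_data.sql", "w") as f:
        for stmt in test_customer_rows(500):
            f.write(stmt + "\n")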

How to synchronize development and production database

Do you know of any applications for synchronizing two databases? During development it's sometimes necessary to add a table row or two, a new table, or a column.
Usually I write every SQL statement to a file, and during deployment I execute those lines on my production database (after backing it up first).
I work with MySQL and PostgreSQL databases.
What is your practice, and which applications help you with it?
You asked for a tool or application answer, but what you really need is a process answer. The underlying theme here is that you should be versioning your database DDL (and DML, when needed) and providing change scripts that can update any version of your database to a higher version.
This set of links provided by Jeff Atwood and written by K. Scott Allen explain in detail what this ought to look like - and they do it better than I can possibly write up here: http://www.codinghorror.com/blog/2008/02/get-your-database-under-version-control.html
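To make that concrete, here is a minimal sketch of the pattern those posts describe: numbered change scripts applied in order, with the current version tracked in a table. It uses SQLite purely so the example is self-contained; the same logic applies to MySQL or PostgreSQL:

    import os
    import sqlite3  # stand-in engine so the sketch is runnable as-is

    def applied_version(conn):
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
        row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
        return row[0] or 0

    def migrate(conn, script_dir):
        """Apply migrations/001_xxx.sql, 002_xxx.sql, ... above the current version."""
        current = applied_version(conn)
        for fname in sorted(os.listdir(script_dir)):
            if not fname.endswith(".sql"):
                continue
            version = int(fname.split("_", 1)[0])  # leading number is the version
            if version <= current:
                continue
            with open(os.path.join(script_dir, fname)) as f:
                conn.executescript(f.read())
            conn.execute("INSERT INTO schema_version (version) VALUES (?)",
                         (version,))
            conn.commit()
            print(f"applied {fname}")

    migrate(sqlite3.connect("app.db"), "migrations")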
For PostgreSQL you could use Another PostgreSQL Diff Tool (apgdiff). It can diff two SQL dumps very fast (a few seconds on a DB with about 300 tables, 50 views, and 500 stored procedures), so you can find your changes easily and get an SQL diff that you can execute.
From the APGDiff Page:
Another PostgreSQL Diff Tool is simple PostgreSQL diff tool that is useful for schema upgrades. The tool compares two schema dump files and creates output file that is (after some hand-made modifications) suitable for upgrade of old schema.
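The round trip is short enough to script. A sketch driven from Python (the jar filename and database names are placeholders; the underlying commands are just pg_dump and java -jar):

    import subprocess

    # Dump both schemas (schema only; apgdiff compares dump files, not live DBs).
    for db, out in [("proddb", "old.sql"), ("devdb", "new.sql")]:
        with open(out, "w") as f:
            subprocess.run(["pg_dump", "--schema-only", db], stdout=f, check=True)

    # apgdiff emits the DDL needed to upgrade old.sql's schema to new.sql's.
    with open("upgrade.sql", "w") as f:
        subprocess.run(
            ["java", "-jar", "apgdiff-2.4.jar", "old.sql", "new.sql"],
            stdout=f, check=True,
        )
    # Review upgrade.sql by hand before running it against production.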
Have scripts (under source control, of course) that you only ever add to the bottom of. Combine that with regular restores from your production database to dev, and you should be golden. If you are strict about it, this works very well.
Otherwise, I know lots of people use the Red Gate tools for SQL Server.
Another vote for RedGate SQL Compare
http://www.red-gate.com/products/SQL_Compare/index.htm
Wouldn't want to live without it!
Edit: Sorry, it seems this is only for SQL Server. Still - if any SQL Server users have the same question I'd definitely recommend this tool.
If you write your SQL statements for your development database (which are, I imagine, a series of DDL instructions such as CREATE, ALTER and DROP), why don't you keep track of them by recording them in a table with a "version" index? You will then be able to:
track your version changes
make a small routine allowing the "automatic" update of your production database by sending the recorded instructions to the database.
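A compact sketch of that idea, with the recorded instructions replayed from the tracking table (SQLite is a stand-in engine here; the table and column names are illustrative):

    import sqlite3

    conn = sqlite3.connect("prod.db")  # stand-in for the production connection
    # The tracking table described above: each DDL change recorded with a version.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS ddl_log ("
        "version INTEGER PRIMARY KEY, stmt TEXT, applied INTEGER DEFAULT 0)"
    )

    def apply_pending(conn):
        """The 'automatic update' routine: replay statements not yet applied."""
        pending = conn.execute(
            "SELECT version, stmt FROM ddl_log WHERE applied = 0 ORDER BY version"
        ).fetchall()
        for version, stmt in pending:
            conn.execute(stmt)
            conn.execute("UPDATE ddl_log SET applied = 1 WHERE version = ?",
                         (version,))
            conn.commit()

    apply_pending(conn)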
I really like the EMS tools.
Their tools are available for all popular DBs, and you get the same user experience for every type of DB.
One of the tools is the DB Comparer.
TOAD has saved many an ass several times in the past. Why do people run SQL with no exit strategy?
The Red Gate one is good also.
Siebel (CRM, Sales, etc. management product) has a built-in tool to align the production database with the development one (dev2prod).
Otherwise, you've got to stick with manually executed scripts.
Navicat has a structure synchronisation wizard that handles this.
I solve this by using Hibernate. It can detect and auto-create missing tables, columns, etc. (via its hbm2ddl schema-update setting).
You could add some automation to your current way of doing things by using dbDeploy or a similar script. This will allow you to keep track of your schema changes and to upgrade/rollback your schema as you see fit.
Here's a straight Linux bash script I wrote for syncing Magento databases, but you can easily modify it for other uses :)
http://markshust.com/2011/09/08/syncing-magento-instance-production-development
DBV - "Database version control, made easy!" (PHP)