Looking for an alternative (band-aid) for synching an .mdb Access database - ms-access

I've been asked for a quick turnaround on this. The group I'm assisting has a .MDB database used by offsite workers who don't have internet access all the time. Because of that, the team implemented an Access DB a while back that allows for synchronization.
As the team grew bigger they started running into the following issues:
Remote synching – when a user tries to synch from a worksite, more often than not the database will crash, either due to loss of wireless signal, the program timing out, or the inspector manually shutting it down because it is taking too long (i.e., 30 or more minutes).
Multiple synchers – we are unable to run more than one synch at a time (there are currently 34 users in 3 different territories). If someone is synching and another person tries to synch at the same time, the second user ends up with an error message and has to shut down their DB and try to synch at a later time.
Incomplete synchs – sometimes when a worker synchs his/her DB, not all the line items copy over to the Master file, which can cause confusion during review.
Are there any workarounds or options I can look into to resolve these?
I have little in the way of resources and time, so anything involving a new server might not work.
Thanks

It sounds as though you are mainly adding new data from different field operatives, rather than everyone updating existing data. If that is the case then that's good, and you could try the following:
Ensure all the tables use Replication IDs for the Primary Keys, as this will ensure no two operatives create conflicting records.
The synchronisation process should then be amended to take a snapshot of the table/tables to a .txt file on the operative's machine, and that file then transferred back to the source machine.
Then at the end of the day, or more often if required, the master copy should be set up to import the new data from all the text files it has received. As there will be no conflicting Primary Keys you should be OK; just remember to insert only those rows whose Primary Key is not already in the table (a sketch of such an import query is shown below).
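As a rough illustration of that last step, assuming the day's snapshot has already been imported into a staging table (the table and field names here are invented, not from the original system), an Access append query along these lines only picks up rows the master table has not seen yet:

-- Minimal sketch: tblInspections is the master table, tblInspections_Staging holds
-- the rows imported from an operative's .txt file, and InspectionID is the
-- Replication ID primary key.
INSERT INTO tblInspections (InspectionID, SiteID, InspectedOn, Notes)
SELECT s.InspectionID, s.SiteID, s.InspectedOn, s.Notes
FROM tblInspections_Staging AS s
LEFT JOIN tblInspections AS m ON s.InspectionID = m.InspectionID
WHERE m.InspectionID IS NULL;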
Hope all that makes sense : )

Related

How to upload an existing database to cPanel?

Oftentimes I use bash scripts to add massive amounts of data to my localhost site databases. Once I see that the new data is working properly on my local website, I export the database from phpMyAdmin and edit the SQL file; granted, with vim it is relatively easy to change all INSERTs to INSERT IGNORE and so on to prepare it to be accepted by phpMyAdmin in cPanel and finally add the data to my website. This becomes cumbersome as the database gets bigger and bigger.
I am new to this and I don't know how to do this operation in a professional/optimal way. Is my entire process wrong? How do you do it?
Thank you for your answers.
Ah, I think I understand better. I can't answer for any kind of specific enterprise environment, but I'm sure there are many different systems cobbled together with all sorts of creative baler twine and you could get a wide variety of answers to this question.
I'm actually working on a project right now where we're trying to keep data updated between two systems. The incoming data gets imported to a MySQL database and every now and then, new data is exported to a .sql file. Each row has an auto incrementing primary key "id", so we very simply keep track of the last exported ID and start the export from there (using mysqldump and the --where argument). This is quite simple and doesn't feel like an "enterprise-ready" solution, but it's fine for our needs. That avoids the problem of duplicated inserts.
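To make that concrete, the condition handed to mysqldump's --where option is just ordinary SQL against the incrementing key. A rough sketch, with an invented table name and a literal standing in for the stored bookmark:

-- Hypothetical incremental export: 12345 stands for the last ID already exported,
-- read from wherever that bookmark is kept.
SELECT *
FROM incoming_data
WHERE id > 12345
ORDER BY id;
-- mysqldump is then run with the same condition, e.g. --where="id > 12345",
-- to produce the incremental .sql file.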
Another solution would be to export the entire database from your development system, then through some series of actions import it to the production server while deleting the old database entirely. This could depend greatly on the size of your data and how much downtime you're willing to perform. An efficient and robust implementation of this would import to a staging database (and verify there were no import problems) before moving the tables to the "correct" database.
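One way to do that final table move in MySQL, sketched here with made-up database and table names, is a RENAME TABLE swap; the whole statement is atomic, so readers see either the old table or the new one, never a half-imported state:

-- Hypothetical swap from a verified staging copy into production.
RENAME TABLE
  production.orders TO production.orders_old,
  staging.orders    TO production.orders;
-- Once everything checks out, the old copy can be dropped.
DROP TABLE production.orders_old;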
If you are simply referring to schema changes or very small amounts of data, then probably version control is your best bet. This is what I do for some of my database schemas; basically you start out with the base schema, then any change gets written as a script that can be run incrementally. So for instance, in an inventory system I might have originally started with a customer table, with fields for ID and name. Later I added a marketing department, and they want me to get email addresses. 2-email.sql would be this line: ALTER TABLE `customer` ADD `email` VARCHAR(255) NOT NULL AFTER `name`;. Still later, if I decide to handle shipping, I'll need to add mailing addresses, so 3-address.sql adds that to the database. Then on the other end, I just run those through a script (bonus points are awarded for using MySQL logic such as "IF NOT EXISTS" so the script can run as many times as needed without error).
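As a sketch of such an idempotent script (continuing the address example above, simplified to a single address column; MySQL itself has no ADD COLUMN IF NOT EXISTS, so one common workaround is to check information_schema first, and the procedure name is just an illustration), 3-address.sql might look like this:

DROP PROCEDURE IF EXISTS add_customer_address;
DELIMITER //
CREATE PROCEDURE add_customer_address()
BEGIN
  DECLARE col_count INT;
  -- Only add the column if it is not already there, so the script can be
  -- re-run as many times as needed without error.
  SELECT COUNT(*) INTO col_count
  FROM information_schema.COLUMNS
  WHERE TABLE_SCHEMA = DATABASE()
    AND TABLE_NAME = 'customer'
    AND COLUMN_NAME = 'address';
  IF col_count = 0 THEN
    ALTER TABLE `customer` ADD `address` VARCHAR(255) NOT NULL AFTER `email`;
  END IF;
END //
DELIMITER ;
CALL add_customer_address();
DROP PROCEDURE add_customer_address;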
Finally, you might benefit from setting up a replication system. Your staging database would automatically send all changes to the production database. Depending on your development process, this can be quite useful or might just get in the way.
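For completeness, on the MySQL side the receiving (replica) server is pointed at the sending one with a statement roughly like the following; the host, credentials, and binary log coordinates are placeholders, and the source server also needs binary logging and a replication user set up first:

-- Run on the production (replica) server so it follows the staging (source) server.
CHANGE MASTER TO
  MASTER_HOST='staging.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;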

MS Access Network Interruption

I have an MS Access system on a network with 15 users. The front end is installed on users' C:\ drives and the back end on a mapped drive X:. The front end is about 8 MB, the back end around 25 MB.
Since day 1, one user constantly (every 30 minutes at best), and some other users occasionally, get a network-interrupted error. Apart from being quite annoying to the users, this causes a temporarily masked/hidden issue where update queries run without error on 2 tables but do not actually update/insert data.
A compact and repair resolves the issue, but it is not feasible to run daily as users have the system open throughout the day. This is such a headache that I've had to write code that checks the data has been written after each query is run, to detect whether the issue is present.
Both IT and I are third parties to the business and are in the difficult opposing positions of "it's your network" and "it's your database". Thankfully it's all calm and peaceful, but it's not getting a solution for the client.
I've installed MS access FE/BE systems on over a hundred networks over the last 10 years and only ever seen the same issue on a peer to peer network. I'm aware that Access is very picky about network stability, but am faced with users who don't believe that there is a problem with the network as their email works and the internet radio doesn't drop out.
What I'm hoping to get assistance with here is either a tool/method that can test a network for stability/robustness with MS Access and prove one of us right or wrong, or perhaps some advice on how I could move forward from this deadlock.
Thanks
I have seen a similar instance with damaged cables. A client of mine had mice that chewed through part of the cable, causing an intermittent interruption. Also, in another case, a cubicle wall was on top of the network cable (poor cable installation) and causing a short.
In order to bypass Access's need for a constant network connection, I have my systems create local temporary tables for any view, and a local, one-record table for any detail form that is actively being edited. Once the user hits 'save' it runs the update query, and once that is done, no active connection with the server is needed again. It allows me to run much faster Access systems and eliminated the need for stable wireless or Ethernet. It does require quite a bit of structural change at first - you will have to insert code to create the local temporary tables in the FE file, and also code an UPDATE sequence in the forms' AfterUpdate events - but the time it has saved me and my users has been tremendous. A rough sketch of the pull/push queries is shown below.
To put it in perspective, I have 1200+ users in the same Access database in a given week (sometimes 400+ in a day), and since they only 'pull' data from the server to make local table copies, there are only a handful of connections at any one time. My users can now dock and undock from their desks without needing to close the database.
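The pull/push step itself can be plain Access SQL run from the front end. A minimal sketch with invented table and field names (tblOrders is the linked back-end table, tmpOrderEdit the local copy, and 1234 stands for the key of the record the user opened):

-- Pull: copy the record being edited from the linked back-end table into a
-- local one-record work table; no connection is held while the user edits.
SELECT * INTO tmpOrderEdit
FROM tblOrders
WHERE OrderID = 1234;

-- Push: on the form's AfterUpdate/save, write the edited values back and
-- then drop the local copy.
UPDATE tblOrders INNER JOIN tmpOrderEdit
  ON tblOrders.OrderID = tmpOrderEdit.OrderID
SET tblOrders.Status = tmpOrderEdit.Status,
    tblOrders.Notes = tmpOrderEdit.Notes;

DROP TABLE tmpOrderEdit;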

What is the best way to prevent Access database bloat

Intro:
I am creating an Access database system that will be rolled out with multi-user functionality.
But as I am creating this database in Access 2000 (old school, I know), there are quite a lot of bugs and random mysterious problems that occur when my database gets past 40-60 MB.
My question:
Has anyone got a good solution to how I can shrink this down or to prevent the bloat?
Details:
I am using many local tables combined with SQL Tables and my front-end links to a back-end SQL Server.
I have already tried compact and repair, but it only ever shrinks it to about 15 MB, and after the user has used the database a few times the bloat quickly expands to over 50-60 MB!
Let me know if more detail is needed but that is the rough outline of my problem.
Many Thanks!
Here are some ideas for you to follow.
You said you also have a lot of local tables. Split the local tables off into yet another Access database. So you'll have 2 back-ends (1 SQL Server & 1 Access), and the front end.
Create a batch file that opens your local-tables back-end database with the /compact option. So it will look something like this:
"C:\Prog...\Microsoft...\Officexx\msaccess.exe" "C:\ProjectX_backend.mdb" /compact
Then run this batch file on a daily basis using scheduled tasks. Your frontend should never need compacting unless you edit it in any way.
If you are stuck with 2000, which has quite a bad reputation, then you have to dig down into your application and find out what creates the bloat. The most common reason is bulk inserts followed by deletes. Other reasons are the use of OLE Object fields, or programmatic changes to form and other objects. You really have to go through your application and find the specific cause.
An mdb file that is only connected to a back-end server and does not make changes to local objects should not grow.
As for your random issues, besides some lack of stability in the 2000 version, you should look into bad RAM in the computers, bad hard drives, and broken network controllers if your mdb file is shared on the network.

Is there a definitive answer to what causes catastrophic failure in Delphi

I have read a few/lots of things on this but they don't seem to help much.
I have an app (it's called "TieUp", but that is irrelevant) that I run manually every day to collate data from several locations.
It is using as sources:
A) Data from a remote SOAP source and loaded into an in-memory TClientDataset via an XMLtransform setup.
B) CSV files downloaded daily and loaded into an in-memory TClientDataset
C) A Mysql Database on the same computer as the program (it's a restored backup of the live source)
D) A remote MS-SQL (SQLServer 2008) database
E) A Mysql Database on a remote server
Data is only read from sources A, B, C and D
Data source E is updated with the consolidated data.
There are between 800 and 2000 records daily, so the datasets are not vast, although the target (E) has grown to around 150,000 records and is increasing daily.
I can normally run this all happily and everything works as expected (if a little slowly, because of all the individual remote lookups to the MS-SQL system), but some days it really screws up and the error is always "Catastrophic Failure!".
The failure does not occur during any particular phase or operation that I can see. The steps are:
1) Get the SOAP(A) data first.
2) Tie in with CSV/In Memory data(B).
3) Lookup References data on Sources C and D to collate
4) Write the consolidated data to source E
After the data is read into the in-memory datasets, everything is in TClientDatasets accessed via DatasetProviders linked to TSQLQueries (they are all on the same servers currently, but I did it that way to keep some flexibility for a future where it might go true three-tier). All queries are contained within the SQLQuery components, as they are actually quite simple - it's just a matter of tying things together.
I am using completely standard components from Delphi 2009 Enterprise. All updates and database update packs have been applied. Each data source has its own DataModule; these are auto-created at startup.
There is obviously quite a lot of data access going on here, but when it crashes (with "Catastrophic Failure") it gets stuck, completely stuck. Windows can't end the task from the normal "TieUp has stopped working" dialog; I have to go to the process and kill it.
There is so much going on and as this only happens once a week or so I really don't know where to start looking.
The reasons for asking the question are twofold: 1) I am trying to eliminate any manual steps and fully automate it, but I can't rely on it if it bombs every week or so. 2) If it happens in the update phase to E, I have to manually delete the new records for the day and start again, as I do not have (or haven't written yet) a mechanism to restart from an arbitrary point, and I would still have to query the DB manually to establish that point for certain.
My next step is to install Delphi on another computer and always run it under the debugger until I can catch it, if it does not freeze first. But that introduces yet another different network connection (instead of the local host one).
So: "Is there a definite answer?" or what is the most likely offending component/connection? Where is the favoured place to start looking?
Thanks in advance...

Problem in Recovering the Lost Data from the Database files (.mdf and .ldf)

Recently, one of our clients has deleted two million rows from a table.
The problem here is that the database was never backed up. I have only the master data file (.mdf) and the log data file (.ldf).
I have downloaded a demo version of an SQL tool, through which I am able to open the .mdf file. When I opened the .mdf file using the tool, all the lost data is there, but I can't save or export that view of the lost data from the tool until I purchase it.
I have followed many steps shown on MSDN and various websites to recover the data, but all of them failed. Can anyone help me with the best process to recover the deleted data from the .mdf/.ldf files and get it back into the database?
One of the sites I have referred to for recovering the data is Recover Lost Data
What is the cost of losing the data? What is the cost over time of losing the data--that is, is it more and more the longer the data remains lost? Compare this cost with the cost of the tool you have found that (apparently) works, and factor in the cost of the time it has and is taking you to find a different solution. It seems likely that, unless they're charging ridiculous amounts of money [can you post the product and cost?], you'd be better off biting the bullet, paying them, and using the product right away, with the (implicit) guarantee of a refund if it doesn't work.
Another option is to get a transaction log reading program that can read and work with data stored in the transaction log... but if you're not doing backups, then your databases are (hopefully!) in Simple recovery mode, and depending on how active your databases are that data may have long-since been dropped from the transaction log. However, all such programs that I've heard about also have licensing fees.
Because yes, recovering deleted data from a SQL Server database is a hard thing to do.