We have an issue where a table seems to be getting reset: the records are deleted and the primary key is reset, which then causes problems for another table that joins to it. My theory is that the connection drops out while retrieving information from the data context, and a blank/default table is then submitted. It also looks like the data context is being reused rather than a new one being created for each unit of work, which I think is the wrong thing to do, but I was wondering whether that could be the cause of the issue?
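To illustrate what I mean by one data context per unit of work, here is a rough sketch in TypeScript-style pseudocode rather than our actual stack; the context API here is hypothetical:

```typescript
// Sketch only: the context API below is hypothetical, standing in for
// whatever data context the real application uses.
interface DataContext {
  submitChanges(): Promise<void>;
  dispose(): void;
}

// Trivial stand-in implementation so the sketch type-checks and runs.
function createContext(): DataContext {
  return {
    submitChanges: async () => { /* write pending changes */ },
    dispose: () => { /* close the underlying connection */ },
  };
}

// What we have now: one long-lived context shared by every operation.
// If a dropped connection leaves it half-populated, a later submit could
// push an empty/default table back to the database.
const sharedContext = createContext();

// What I think it should be: a fresh context per unit of work, so a
// failed read can never be submitted later by an unrelated operation.
async function withUnitOfWork<T>(work: (ctx: DataContext) => Promise<T>): Promise<T> {
  const ctx = createContext();
  try {
    return await work(ctx);
  } finally {
    ctx.dispose();
  }
}

// Usage: each call gets its own context and disposes of it when done.
void withUnitOfWork(async (ctx) => {
  // ... read/modify entities on ctx ...
  await ctx.submitChanges();
});
```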
When I used a local database, I faced the same problem. On every run, Visual Studio copies the main database files into the debug folder and overwrites what is there, so I see a blank database after the run. But there is no real problem: once the solution is released this no longer happens, and the problem disappears.
I want to know whether there is a log that records all the changes users make to a database, much like Git, where every commit can be viewed along with the user who made it, so that if an error occurs we know who caused it. If such a log exists, how do I get at it? If not, how can I create one?
The problem I'm facing is that a table's data has been altered by one of the developers, but there is no way to find out when it happened or who did it, and I am also struggling to find all the changes that have been made to the table.
Databases typically do not provide auditing as standard. I usually implement it within my application. However, for a faster result, there are some plugins for MySQL which you could try.
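To give a flavour of the application-level approach, here is a minimal sketch in TypeScript using the mysql2 library. The `documents` and `audit_log` tables and their columns are hypothetical; the point is simply to record who changed what, and when, in the same transaction as the change itself.

```typescript
import mysql from "mysql2/promise";

// Hypothetical pool; adjust credentials and database name for your setup.
const pool = mysql.createPool({ host: "localhost", user: "app", database: "mydb" });

// Update a row and record who changed what, all in one transaction,
// so a successful change can never be missing its audit entry.
async function updateTitle(docId: number, newTitle: string, userId: string) {
  const conn = await pool.getConnection();
  try {
    await conn.beginTransaction();

    // Read the old value so the audit row shows before and after.
    const [rows] = await conn.query("SELECT title FROM documents WHERE id = ?", [docId]);
    const oldTitle = (rows as { title: string }[])[0]?.title ?? null;

    await conn.query("UPDATE documents SET title = ? WHERE id = ?", [newTitle, docId]);

    // Hypothetical audit table: who, what, and when for every change.
    await conn.query(
      `INSERT INTO audit_log
         (table_name, row_id, column_name, old_value, new_value, changed_by, changed_at)
       VALUES (?, ?, ?, ?, ?, ?, NOW())`,
      ["documents", docId, "title", oldTitle, newTitle, userId]
    );

    await conn.commit();
  } catch (err) {
    await conn.rollback();
    throw err;
  } finally {
    conn.release();
  }
}
```

Database-side triggers can achieve something similar without touching application code, but the application-level version lets you record the logged-in application user, which the database itself usually cannot see.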
PROBLEM
I am developing an app where the data model will be very similar to JSFiddle's. A user will create a new entry that will be assigned a GUID in the database. My question is how to handle the case where other users want to modify/fork/version the original entry. JSFiddle handles this by versioning the entry (so the URL becomes something like jsfiddle.net/GUID/1).
What is the benefit to JSFiddle's method over assigning a new GUID to the modified version and just recording a relationship to the original entry in the database?
It seems like no matter what I will have to create a new entry in the database that will essentially be a modified copy of the original.
Also, there will be both registered and anonymous users just like JSFiddle. The registered users should be able to log in and see all of their own entries and possibly the versions/forks that exist off of their own entries (though this isn't currently a requirement).
Am I missing something? Is there a right and wrong way to do this?
TECH
Using parse.com's RESTful API for data CRUD; node on the server.
What is the benefit to JSFiddle's method over assigning a new GUID to the modified version and just recording a relationship to the original entry in the database?
I would imagine none; both would require the same copy operation and the same double query (in MongoDB) to get the parent.
The only difference is what field you go by.
Am I missing something?
Not that I can see.
Is there a right and wrong way to do this?
It seems as though you have this pretty well covered, frankly.
MVCC does seem the right way to do this in some respects; however, you don't have to go the whole hog. If you were, there might be cause for you to change to a database that has it built in, like CouchDB, because MongoDB's implementation would sit on top of its existing lock mechanisms; it's like adding a lock on a lock.
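For what it's worth, here is a minimal sketch of the "new GUID plus a pointer to the parent" approach in TypeScript. The `Entry` shape and field names are illustrative only, and `fork` just builds the copy; persisting it would go through whatever CRUD call your backend (Parse, in your case) exposes.

```typescript
import { randomUUID } from "crypto";

// Hypothetical shape of an entry; field names are illustrative only.
interface Entry {
  id: string;          // GUID of this entry
  parentId?: string;   // GUID of the entry this was forked from, if any
  version: number;     // 1 for originals, parent.version + 1 for forks
  ownerId?: string;    // set for registered users, undefined for anonymous
  content: string;
}

// Forking is just a copy with a fresh GUID and a link back to the parent.
function fork(parent: Entry, newContent: string, ownerId?: string): Entry {
  return {
    id: randomUUID(),
    parentId: parent.id,
    version: parent.version + 1,
    ownerId,
    content: newContent,
  };
}

// Usage sketch: load the parent, build the fork, then persist the copy.
const original: Entry = {
  id: randomUUID(),
  version: 1,
  ownerId: "user-123",
  content: "console.log('hello');",
};
const forked = fork(original, "console.log('hello, fork');", "user-456");
console.log(forked.parentId === original.id); // true
```

Listing a registered user's own entries (and any forks of them) then becomes one query on `ownerId` and one on `parentId`, which is the same double query either way; only the field you filter on changes.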
I am completely perplexed.
A colleague has a database issue. I noticed that the (internal) software that created the problematic local database file uses programmatic access to MS Jet, so an easy first step was to see whether MS Access (2010) was happy with the database, and then fix, export/import, or repair it from there.
I copied the stand-alone local Jet data file to a non-networked virtual machine (so no chance of external data), and MS Access opened the db file easily, but I can't make sense of what I'm seeing.
MS Access is configured on that system to show all hidden and system objects, confirmed since the Access system tables in the file are all visible and can be opened. These are my observations:
The object browser lists the usual MS system tables, and a bunch of SELECT queries (which look correct) of the form SELECT (FIELD LIST) FROM (OTHERTABLENAME) WHERE (FIELDNAME=VALUE), nothing more.
The select queries show the usual grid with valid data records when opened, and the data looks correct as well.
No data tables with the given names are showing in the object browser interface.
The given names are listed as objects of the database, in the system table MSysObjects.
So... the underlying data tables ARE named in MSysObjects and seem to be true data tables, but they are NOT listed in the object browser, and I can't figure out how to open their datasheets (even though MS Access's own system tables are listed and "Show hidden/system" is enabled). Yet the tables surely do exist in the file, since an apparent SELECT query is pulling data from them, and the file is on a clean, non-networked machine with no other reachable sources.
Any ideas? I want to check the underlying data, but... what's going on?
When I examined your database, I discovered that the reason you can't access the tables normally is that the authors of the internal application which created the db file implemented measures to prevent normal access.
I advise you to contact them and your managers to get authorization and assistance to view the data.
Also, please be cautious with this question. A suspicious person might uncharitably interpret it as a disguised request for hacking help. Please note I am not accusing you of anything underhanded; I am simply asking you to notice how your question might be perceived. If that were to happen, I don't know what the consequences would be on Stack Overflow, but I can't imagine they would be good. So please be careful.
I have been working on a huge ETL project with 150+ tables and during the design I had to make a major change on destination column names and data types for a couple of tables.
My problem is that I can't get SSIS to see the new schema for the tables I changed, so I would like to know how I can get SSIS to refresh this schema. I find it kind of ridiculous that there is no way to tell SSIS to update its metadata from the database schema, especially for a database migration.
Recreating the project from scratch is out of the question because I have already spent hours on it. Manually changing the 400+ columns I altered is not an option either.
What about opening the Advanced Editor and pressing the Refresh button at the bottom left?
Following up on my previous self-answer, I finally found what was preventing the metadata from being refreshed.
When I originally modified my database, I had actually executed another script that did a DROP on the table and then a CREATE TABLE to recreate it from scratch. In that case, SSIS was never able to detect the changes, and I had to do everything described in my other answer.
Later today I had to make some minor modifications, and this time I opted for an ALTER TABLE. Magically, this time SSIS detected all the changes, even prompting me to refresh the columns from the Advanced Editor, which worked fine.
So basically, all of these issues were caused by my limited knowledge of DBA best practices.
I found a way to fix it, but it was a bit tricky.
Even though I completely removed any reference to the table from my packages, I was still getting the old metadata.
I still don't have a clear explanation, but here is what I did to fix it:
Removed any reference to concerned source and destination tables
Deleted obj and bin folder from the project folder
Saved, closed and then reopened the project
Created a new data flow from scratch, and the updated metadata was finally there
I don't know where that information was cached, but I suspect that the obj folder keeps a cached copy of your packages, or that Visual Studio holds the metadata in memory and only releases it when you close it. Anyway, following these steps should fix it.
Problem and background on the database: data from a record in an Access 2003 database has disappeared. This database has one backend and three frontends, multiple users, and is hosted on Citrix. Within this database, we have records of all clients served, numbering in the thousands.
Background info: The form for client data entry is set up with various subforms, including both a "programs enrolled" subform and a "services" subform. A client can be enrolled in multiple programs. Once enrolled in a program, services can be entered for that program area using the services subform. There are multiple fields in the services subform, one of which is a drop-down field allowing you to choose from the programs a client has been enrolled in (the list is updated for that client whenever he is enrolled in a new program).
The problem details: For one specific record and one specific program area, the program has disappeared from the "programs enrolled" subform and all of the related services have disappeared from the "services" subform for a period of 3 months of data entry. However, other programs and services for this record did not disappear.
Questions: Is the disappearance of data a common Access 2003 problem? Are there tests in place that can be run to see if data is disappearing and catch that data? If so, what are they? If there is specific code involved, what is it? What can be done to prevent the disappearing of data (other than using a different database)?
As @HansUp says, this is not a common issue. Two things spring to mind:
Jet/ACE files do not like to be stored on file servers with replicated file systems unless the file is edited on only one side of the replication synch. That is, if two servers have a replicated volume and you have people connecting to both servers and trying to edit both copies of the database, you'll hose the data. If you're editing on only one side, there shouldn't be any issues, but I worry about this kind of thing. Another issue might be virtualization, though I don't have any definite scenario where that could be a problem.
More than 10 years ago I saw an issue that combined the old bookmark bug with On Error Resume Next and caused data not to be saved. What was happening was that turning off error reporting/handling with On Error Resume Next was not properly going out of scope, so errors occurring when leaving a record via bookmark navigation were never reported. The result was that edits were lost. When I changed the bookmark navigation to save the record if it was dirty before changing the bookmark pointer, the problem went away. But while I was at it, I eliminated as many On Error Resume Next statements as possible.
Another guise of the second problem would be if DoCmd.SetWarnings is set to False. I never bother with SetWarnings, so it's not an issue, but it's a common novice technique, and worth looking at. The idea is that errors are happening but the report of them is not getting to the users, and thus, the edits are being lost.
I don't consider either of these very likely, but your situation is so uncommon that even unusual things like these are worth looking at.
Is the disappearance of data a common Access 2003 problem?
No. Data in any version of Access does not just disappear without cause, unless perhaps your database file has been corrupted. More likely, you have a design error in your form and/or database schema.
If you have a form/subform based on parent and child tables, make sure you have a relationship established and that you enforce referential integrity on that relationship. That is a standard practice to ensure you can't delete a parent record while you still have at least one related record in the child table.
Based on your description, it's not clear whether the data is missing from your data tables or just not appearing in your subform when you expect to see it. You might want to clarify that point.
At this point, we have determined that the missing data occurs at random and is not traceable to a specific pattern, which makes it rather difficult to pinpoint a solution. We are still uncertain whether the data actually disappeared or whether the user assumed he had entered the data but had not. Unfortunately, our db host does not keep a series of backups that we could trace back through, but rather overwrites the backup at the end of each day, which again makes it hard to trace the disappearance. We will continue to investigate the various answers provided here and are grateful for your great input and suggestions.