I have looked into the references and do not see any libraries with "MISSING" at the end (see image below).
I unchecked and re-checked the references that are in use, and that didn't change anything. I then tried to compile the database, and that didn't resolve the issue. I also tried creating a new database and importing all tables, queries, etc., but ran into an error stating "one or more of the newly created objects contain a data type that isn't compatible with earlier versions of Access." Is it possible that this is due to the change from datetime to datetime2? I believe I accepted that change when creating date fields in SQL Server Management Studio.
Update: it now appears that any criteria involving dates are not working. I can't even run a query with the following criteria on a date field:
Between [start_date] And [end_date]
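If the datetime2 change is the culprit, one possibility is that an older ODBC driver on the Access side maps datetime2 columns to text, which would break date criteria like the one above. One thing I may try is converting the columns back to datetime on the SQL Server side and then refreshing the linked tables; a rough sketch with placeholder table/column names:
-- Placeholder names; check for dependent indexes/constraints before altering the column
ALTER TABLE dbo.MyTable ALTER COLUMN MyDateField datetime;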
I have spent two days researching and trying to fix my issue with Access form edits. I understand that there may be similar questions out there, but none of the suggestions fixed my problem. Also, my situation might be slightly different.
I'm on Access 2017 and using an Access split form that is bound to a linked table on SQL Server 2017. I have an Add button that simply adds the record entered and moves to a new record. When I add a record through my form and then try to edit it in the datasheet view of my split form, I get a write conflict error.
I've already validated that I have a primary key on my table and that there are no null bit fields.
The other thing to note is that this started happening after migrating from SQL Server 2014 to SQL Server 2017.
One thing I have read about and have yet to try, because of the "drastic" change it entails, is to set the compatibility level of my database to something lower, like SQL Server 2014. This would be a last resort, however, and would only be to validate what the cause of the error might be.
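For reference, that last-resort change would be a one-liner in SSMS; this is just a sketch with a placeholder database name, and 120 is the SQL Server 2014 compatibility level:
-- Placeholder database name; 120 = SQL Server 2014
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 120;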
I've tried everything on this page that is applicable to my situation: http://www.accessrepairnrecovery.com/blog/fix-ms-access-write-conflict-error
What else can I try to resolve this? I'm hoping someone out there has run into something similar.
First, this question has been answered hundreds of times on Stack Overflow.
Next: your link has nothing to do with using SQL Server, so its suggestions likely will not help.
The main causes (and the fixes repeated over and over) when using Access with SQL Server are:
Ensure that all tables have a PK defined.
Ensure that any bit fields have a default set up on SQL Server (usually 0).
Ensure that each table has a timestamp field.
This is especially important if you have any floating-point or "real" data type columns. Both the Access upsizing wizard and the migration tool for Access suggest and add the timestamp field by default.
These three points have been repeated for the last 18 years in nearly every article about using Access with SQL Server, so make sure you have checked all three.
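For example, a hedged T-SQL sketch of the three fixes, using placeholder table/column names (it assumes an existing NOT NULL ID column for the key):
-- 1. Make sure the table has a primary key
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (ID);

-- 2. Back-fill NULL bit values, then give the bit column a default of 0
UPDATE dbo.MyTable SET IsActive = 0 WHERE IsActive IS NULL;
ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_MyTable_IsActive DEFAULT 0 FOR IsActive;

-- 3. Add a rowversion ("timestamp") column for Access to use in its concurrency checks
ALTER TABLE dbo.MyTable ADD RowVer rowversion;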
After any table changes, re-link the tables on the Access client side.
Then test whether you can edit data in the linked table directly from Access (in datasheet view). If you can, go back to testing with your form. If the form still causes a write conflict, the suggestions in the article you linked to will start to apply, but not until all three issues above have been addressed.
The timestamp is often required for a subform, and also when you have real/float columns. Due to rounding in such floating-point numbers, the comparison between the two copies of the record fails. Adding the timestamp column fixes this, because Access no longer has to do a field-by-field compare; it uses the timestamp column (not to be confused with a datetime column) to determine whether the record has changed. Adopting this feature also reduces the network chatter between client and server, since Access can tell whether the server record has changed without resorting to a field-by-field compare.
I recently encountered the same error and it turned out to be that I had an active sort on the datasheet view. Once I removed the sort, voila, problem solved! (Nothing like shooting myself in the foot.)
Background:
Legacy code running in MS Access 2003.
SQL statements are run via CurrentDb in Access.
Currently running on a Windows 7 32-bit machine.
Connecting to MySQL Server 5.5 through the MySQL ODBC 5.1 driver.
Problem:
Trying to migrate to Windows Server 2012 64-bit.
ODBC 5.3 Unicode Driver (32-bit).
We don't want to spend time rewriting everything, as there is a lot of code and it will be retired in the not-too-distant future.
Issue:
Several SQL statements fail when run on the new server; they worked on the old one.
All of the failing SQL statements use Now().
The error description says the ODBC call failed, while the more detailed description reports a date overflow: "[MySQL][ODBC 5.3(w) Driver][mysqld-5.5.28-log]Date overflow".
It happens randomly; when it does and Access stops, one can usually just choose Continue and the SQL will then work. It fails on less than 1% of thousands of runs.
The only dates in the SQL are in the WHERE clause, e.g. "and fieldA > now()", where fieldA is a datetime column. This is a query used to open a recordset. Another error, during an insert, is the same, except that an integer is subtracted from Now() before it is compared to a datetime.
I don't understand why it says date overflow, since there doesn't seem to be any reason for the time portion of either the datetime or Now() to be removed. And since the datetime field is already in the database and Now() returns the current date and time, there shouldn't be any invalid dates.
Any help in what the problem might be or how to debug/log anything that might help would be highly appreciated.
Turning on tracing in the ODBC driver is not an option: the error happens so rarely, and there is so much traffic, that tracing would slow everything to a crawl.
Note that I did encounter one SQL statement where the date overflow error message was correct. It seems that prior to 5.3, a datetime inserted into a date field was automatically truncated, because a statement that had succeeded 3,000 times started failing. That statement has been fixed by extracting the date from the field first. But the other errors must be something different.
Oracle has released a new version containing a bugfix: 5.3.8.
This error was caused by a bug that seems to have been introduced in version 5.1.11.
In the driver's advanced options there is now a "Date overflow" check box, which has to be ticked for the code to continue when the error occurs.
Reply from Oracle about the fix:
"For your information the fix approach was that in C or C++ it is possible to read or write DATE type using SQL_TIMESTAMP_STRUCT.
This struct can hold both date and time.
The error (Date overflow) was generated when with operations that are supposed to be DATE-only this struct got non-zero values for time.
That is the canonical approach as ODBC API requires, however, it causes inconveniences sometimes when for instance the app did not bother
to initialize the whole structure with 0 values because it knows it will only need the DATE part but the random values for TIME fraction could cause the errors despite of being truncated.
A new option was introduced to continue with the query execution rather then return error.
The server will ignore the TIME part and the result is the same as if there were zeroes."
This appears to be an issue with MySQL Connector/ODBC versions 5.2 and later. A web search led to this thread on another site, which in turn led to this unresolved bug report. Note that this is a broader MySQL Connector/ODBC issue; it is not specific to Access applications.
MySQL Connector/ODBC 5.1.13 is still available for download, so your most expedient solution would probably be to simply use that version until the code in question is taken out of service. Your other alternatives might include:
using a newer version of MySQL Server (with [better?] support for fractional seconds), or
tweaking the Access SQL queries to use something other than the Now() function (which you, understandably, would like to avoid).
I had exactly the same problem - "Date overflow" error when saving the data in an Access form.
I changed the data type from "datetime" to "timestamp" in the linked MySQL table and this solved the issue. The datetime data type seems to be too short to accommodate the value of Now() in MS Access.
Remember to refresh the linked tables in Access afterwards.
I also noticed that there was a "Not Null" tick mark next to the ModifyTime timestamp field in the MySQL table. I unticked this.
After these two changes I no longer get the Date Overflow error when Me.Dirty=False executes in Access.
I am using Access 365 (x64), MySQL ODBC Connector 8.0.31 (x64), and MySQL 5.6.
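For what it's worth, the two MySQL changes above amount to something like this (placeholder table name; run it and then refresh the linked table in Access):
-- Placeholder table name: switch the column to a nullable TIMESTAMP with no NOT NULL constraint
ALTER TABLE my_table MODIFY ModifyTime TIMESTAMP NULL DEFAULT NULL;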
One of our forms keeps generating this error message sporadically. The issue occurs on our Order form, which is bound to a linked SQL Server 2008 table. After printing an advice note (using a report), the order status is set to 'Printed Order'. At this point, I sporadically see a "-2147352567 The data has been changed" error. I would say 95% of the time this doesn't occur, but it's the other 5% that's causing us headaches (and numerous support calls).
Oddly, closing the form and trying the same action on the order causes the same error message, but closing the database and trying again works fine.
It's as if there are some uncommitted changes to the table/record the form is bound to, which exist even when there are no forms, reports, etc open.
The code looks like this:
Select Case Me.txtCurrentStatus
    Case NEW_ORDER, UNPRINTED_ORDER:
        Me.txtCurrentStatus = PRINTED_ORDER
End Select
'Commit changes
If Me.Dirty = True Then Me.Dirty = False
#iDevelop actually prompted me to look into adding a timestamp to the table, so I take only partial credit ;)...
In short - when using linked tables in Microsoft Access, if the table does not contain a column of type timestamp, Access will compare every column in the table to see if the data has been changed since the record was retrieved. There are several data types that Access is unable to check reliably (see article below). Simply adding a column of type timestamp changes this behaviour and instead, Access only checks to see if the rowversion has changed... which makes this check more reliable and also improves performance.
Oddly, this isn't really a timestamp at all; it's a rowversion, but in SQL Server 2008, which I am using, rowversion isn't available via the GUI.
See https://technet.microsoft.com/en-us/library/bb188204%28v=sql.90%29.aspx.
Probably the leading cause of updatability problems in Office Access–linked tables is that Office Access is unable to verify whether data on the server matches what was last retrieved by the dynaset being updated. If Office Access cannot perform this verification, it assumes that the server row has been modified or deleted by another user and it aborts the update.
There are several types of data that Office Access is unable to check reliably for matching values. These include large object types, such as text, ntext, image, and the varchar(max), nvarchar(max), and varbinary(max) types introduced in SQL Server 2005. In addition, floating-point numeric types, such as real and float, are subject to rounding issues that can make comparisons imprecise, resulting in cancelled updates when the values haven't really changed. Office Access also has trouble updating tables containing bit columns that do not have a default value and that contain null values.
A quick and easy way to remedy these problems is to add a timestamp column to the table on SQL Server. The data in a timestamp column is completely unrelated to the date or time. Instead, it is a binary value that is guaranteed to be unique across the database and to increase automatically every time a new value is assigned to any column in the table. The ANSI standard term for this type of column is rowversion. This term is supported in SQL Server.
Office Access automatically detects when a table contains this type of column and uses it in the WHERE clause of all UPDATE and DELETE statements affecting that table. This is more efficient than verifying that all the other columns still have the same values they had when the dynaset was last refreshed.
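In practice the fix is a single column. A hedged sketch with a placeholder table name (re-link the table in Access afterwards so it picks up the new column):
-- Placeholder table name; in the SQL Server 2008 GUI this type shows up as 'timestamp'
ALTER TABLE dbo.Orders ADD RowVer rowversion;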
I am converting a large MS Access 2010 application to act as a front end to a MySQL 5.5 database, via the v5.1 ODBC Connector, and I am experiencing a strange problem when inserting new records with bound data forms.
In a data-entry form, if the default value of a date field is set in the properties of the control as a constant (such as #04-01-2014#), the new record is created in MySQL successfully, and after saving, all values are visible in their associated bound controls. But if the default value is defined in the Access control as a function (for example =Date()), then although the MySQL row is created successfully, ODBC fails to find the new row and Access displays all values as #DELETED. Refresh and/or requery commands do not help. This has nothing to do with the well-known issue of needing a primary key and a timestamp in MySQL; all of those safeguards are in place, and as stated, it does work without front-end defaults.
No defaults are being set in the MySQL backend, and it makes no difference whether or not nulls are allowed. The data type used in the backend is DateTime. If I set defaults in the back end and none in the front end, everything is fine. But that way, the user DOES NOT SEE THE DEFAULTS in the data entry form... an unacceptable situation.
I have also tested both ODBC v5.2, and MySQL 5.6 with exactly the same results.
A solution that seems to work (so far) is to set all defaults in code, in the form's BeforeUpdate event. Something like this:
Private Sub Form_BeforeUpdate(Cancel As Integer)
    If Me.NewRecord Then
        'Only fill in defaults for brand-new records
        Me.field1.Value = Date
        Me.field2.Value = fOtherDate()
        'etc
    End If
    'other code
End Sub
I could insert all new records with unbound forms and pass-through queries, but with 75,000 lines of code, that is a very big job.
My question is - why do I need these "workarounds"? Isn't the whole purpose of ODBC to allow fairly normal operation? What is it about simple functions that "breaks it"? What other "ordinary" Access methods will break it? If I don't understand why it didn't work I can't be sure it is properly fixed.
Has anyone else had any experience with this SPECIFIC issue? I could not find any reference to it elsewhere. Thank you for your time.
I am using MS Access 2003 under Windows 7 (64-bit), with an external linked table on a MySQL server (5.0.51a-24+lenny5), connected via the MySQL ODBC connector (version 5.1.10, because the newest 5.1.11 is buggy). When I open this table in MS Access and try to delete some records from it, I get the following error:
The Microsoft Jet engine stopped the process because you and another user are attempting to change the same data at the same time.
When I try to edit some records in the table, I get the following error:
This record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made. Copying the changes to the clipboard will let you look at the values the other user entered, and then paste your changes back in if you decide to make changes.
However, when I do it via a delete or update query in MS Access, it works fine! I just cannot delete the records directly from the table.
I found out (see the detailed analysis below) that the problem occurs when there are double fields containing values with many decimal digits. See:
CREATE TABLE `_try4` (
`a` int(11) NOT NULL default '0',
`b` double default NULL,
PRIMARY KEY (`a`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_czech_ci;
insert into _try4 values (1, NULL),(2, 4.532423),(3,10),(4,0),
(5,6.34324),(6, 8.2342398423094823);
The problem is only present when you try to delete/edit the last record (a = 6), otherwise it is OK!
The issue is documented at http://support.microsoft.com/kb/280730, which proposes these 3 workarounds:
Add a timestamp column to the SQL table. (JET will then use only this field to see if the record has been updated.)
Modify the data type in SQL Server to a non-floating-point data type (for example, Decimal); see the sketch after this list.
Run an Update Query to update the record. You must do this instead of relying on the recordset update.
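As an illustration, applying the second workaround to the sample table above would look something like this (precision and scale are chosen just to fit the sample values; adjust as needed):
-- Convert the double column to a fixed-point type so Jet can compare values exactly
ALTER TABLE _try4 MODIFY b DECIMAL(30,20) NULL;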
However, these 3 workarounds are not satisfactory. Only the first might have been, but that workaround didn't work, as expected; it probably only works with MS SQL Server.
Are there any other solutions/workarounds for this problem?
Additional details:
The MySQL server is just for me, nobody else is accessing it.
Insertion of new records was working fine.
Primary key is well defined for this table.
Restart of MS Access didn't help.
Deleting the link to the ODBC table and linking it again didn't help either.
Linking the table from brand new Access database didn't help.
Changing the MySQL database engine from MyISAM to InnoDB didn't help either.
There is no problem with permissions; this user@host has all permissions.
I can normally delete the records from the MySQL console at the server without problem.
Trying to set MySQL Connector ODBC options didn't help: Allow big results, Enable automatic reconnect, Allow multiple statements, Enable dynamic cursors, Force use of forward-only cursors, Don't cache results of forward-only cursors.
I turned on debugging in the MySQL ODBC connector; it created a myodbc.sql log, but it didn't contain any corresponding queries for the edits/deletes (I don't know why).
More details about the structure of the linked table would be helpful, but I'll hazard a guess.
I've had a similar problem in both MS Access 2003 and 2010 when I included nullable boolean fields in the SQL Server linked table. It seems Jet databases have a problem with nullable bit fields. Check out this answer for more information: https://stackoverflow.com/a/4765810/1428147
I fixed my problem by making boolean fields non-nullable and setting a default value. If your problem is the same as mine, but with MySQL, try doing the same.
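For MySQL, that fix would look roughly like this (placeholder table/column names; back-fill existing NULLs before adding the NOT NULL constraint):
-- Placeholder names: replace NULLs, then make the flag NOT NULL with a default of 0
UPDATE my_table SET is_active = 0 WHERE is_active IS NULL;
ALTER TABLE my_table MODIFY is_active TINYINT(1) NOT NULL DEFAULT 0;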
I ran into the same issue here. The solution was to remove the default values from the decimal fields in the table. I was able to keep the decimal data type; I just removed the default value of 0.0000 that I had defined earlier, set it to null, and the bug was fixed.
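In MySQL terms, the change described above is roughly the following (placeholder names; the type stays DECIMAL, only the default changes):
-- Placeholder names: keep DECIMAL but drop the 0.0000 default so the column defaults to NULL
ALTER TABLE my_table MODIFY amount DECIMAL(18,4) NULL DEFAULT NULL;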
My workaround was to copy the table data into Excel, use phpMyAdmin to clear the table, do the editing in Excel, and copy the 'new' data (i.e. all of it, after editing) back into Access.