MS Access 2000 (*.mdb) default value of Now() sometimes fails?

Is there a known failure mode of an MS Access 2000 database in which inserts are refused (or, worse, silently discarded) on a table containing a Date/Time field with a default value of =Now()?
The date/time field in question is neither indexed nor required.
But when an INSERT query is sent to the database, it looks as if the =Now() function fails and the data is not written to the table. The auto-increment field is still consumed, however: when =Now() later succeeds, there is a gap in the auto-increment values equal to the number of times the query was run.
E.g. I see in the table:
ID | Data | Timestamp
5  | foo  | 11/15/2016 17:15:00
1  | foo  | 11/15/2016 17:11:00
when INSERT INTO TheTable ([Data]) VALUES ('foo') is run every minute and the problem happens on runs 2, 3 and 4. Eventually, after some time, it succeeds (as shown by ID=5).
Why do I think it could be a =Now() problem?
Because the same (or a similar) failure happens if the computer clock is moved backwards (e.g. during a DST adjustment).
But it recently happened out of the blue: data could not be written to that table for a couple of hours, even though the DST adjustment had already taken place.
(The program itself is not told of the query failure and charges forward as if nothing happened; some debugging effort is still pending.)
I looked around SO and wonder whether Table Field Default Property Values Functions Not Working Anymore in Microsoft Access 2010 may have something to do with it. However, the program and the database communicate via the ODBC Microsoft Access (*.mdb) driver (yes, MS Office 2000 files...).
Hope this makes sense,
Kind Regards...

I have never heard of such an issue, but why not just adjust your SQL:
INSERT INTO TheTable ([Data], [Timestamp]) VALUES ('foo', Now())
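If the value comes from the program over ODBC, a parameterized form may be preferable (a sketch, not part of the original answer; the ? marker is the standard ODBC parameter placeholder):
INSERT INTO TheTable ([Data], [Timestamp]) VALUES (?, Now());
The timestamp is then still generated by the database engine at insert time, just no longer through the table-level default that appears to be failing.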

Related

Sync SQL Binary column to MySQL table

I’m attempting to use a piece of software (Layer2 Cloud Connector) to sync a local SQL table (Sage software) to a remote MySQL database, where the data is used for reports generated via the company's web app. We are doing this with about 12 tables, and have been doing so for almost two years without any issues.
Background:
I’m using a simple piece of software that uses a SELECT statement to sync records from one table to another using ODBC, in this case from SQL (SQLTable) to MySQL (MySQLTable). To do so, the software requires a SELECT statement for each table, a PK field, and, being ODBC-based, a provider. For SQL I'm using the Actian Zen 4.5, and for MySQL I'm using the MySQL ODBC 5.3.
Here is a screenshot of what the setup screen looks like for each of the tables. I have omitted the other column names that I'm syncing to make the SELECT statement more readable. The other columns are primarily varchar or int types.
Problem
For unrelated reasons, we must now sync a new table. Like most of the other tables, it has a primary key column named rGUID of type binary. When initially setting up the other tables, I tried to sync the primary key as a binary type to a MySQL binary column, but it failed when attempting to verify the SELECT statement on the SQLServer side with the error “Cannot remove this column, because it is a part of the constraint Constraint1 on the table SQLTable”.
Example of what I see for the GUID/rGUID primary key values stored in the SQLTable via Access, or in MySQL after syncing as string:
¡狻➽䪏蚯㰛蓪
Ҝ諺䖷ᦶ肸邅
ब惈蠷䯧몰吲론�
ॺ䀙㚪䄔麽骧⸍薉
To get around this, I use CAST in the SQLTable SELECT statement to CAST the binary value as a string using: CAST(GUID as nchar(8)) as GUID, and then set up the MySQL column as a VARCHAR(32) using utf8_general_ci collation.
This has worked great for every other table since we originally set this up. But this additional table has considerably more records (about 120,000 versus 5,000-10,000), and though I’m able to sync 10,000 – 15,000 successfully, when I try to sync the entire table I get about 10-12 errors such as:
The metabase record 'd36d2dbe-fa89-4712-be4c-6b212367004b' is marked to be added. The table 'SQLTable' does not contain a corresponding row. Changes made to this metabase record will be reset to the initial state.
I don't understand what is causing the above error or how to work past it.
What I’ve tried so far:
I’ve confirmed the SQLTable has no other unique fields that could be used as a PK in place of the rGUID column.
I’ve tried using different type, length and collation settings on the MySQL table, and have had mixed success, but ultimately still get errors when attempting to sync the entire table.
I’ve also tried tweaking the CAST settings for the SQL SELECT statement, but nchar(8) seems to work best for the other tables.
I've tried syncing using HASHBYTES('SHA1', GUID) as GUID and syncing the value of that, but get the below ODBC error.
I was thinking perhaps I could convert the SQL GUID to its value, then sync that as a varchar (or a binary), but my attempts at using CONVERT in the SQLTable SELECT statement have failed.
Settings I used for all the other tables:
SQL SELECT Statement: SELECT CAST(GUID as nchar(8)) as GUID, OtherColumns FROM SQLTable;
MYSQL SELECT Statement: SELECT GUID, OtherColumns FROM MySQLTable;
Primary Key Field: GUID
Primary Key Field Type: String
MySQL Column Type/Collation: VARCHAR(32), utf8_general_ci
Any help or suggestions at all would be great. I've been troubleshooting this in my spare time for a couple of weeks now, and have not had much success. I'm not particularly familiar with the binary type, and am hoping someone might have an idea on how I might be able to successfully sync this SQL table to MySQL without these errors.
Given the small size of the datasets involved, I would select the key as CHAR(36) from SQL Server and store it in a CHAR(36) column in MySQL.
If you are able to control the way the data is inserted by Layer2 Cloud Connector, then you could set your MySQLTable GUID column to BINARY(16) instead:
SELECT CAST(GUID AS CHAR(36)) AS GUID, OtherColumns FROM SQLTable;
INSERT INTO MySQLTable (GUID) VALUES (UUID_TO_BIN(GUID));
SELECT BIN_TO_UUID(GUID) AS GUID, OtherColumns FROM MySQLTable;
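For completeness, a sketch of the MySQL DDL for the BINARY(16) variant (the column definition is my assumption, and UUID_TO_BIN/BIN_TO_UUID require MySQL 8.0 or later):
CREATE TABLE MySQLTable (
    GUID BINARY(16) NOT NULL PRIMARY KEY
    -- , OtherColumns ...
);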

Can I do Change Data Capture with MariaDb's Automatic Data Versioning

We're using MariaDB in production and we've added a MariaDB slave so that our data team can perform some ETL tasks from this slave to our data warehouse. However, MariaDB lacks a proper Change Data Capture feature: the data team wants to know which rows in the production tables changed since yesterday, so that they can query only the rows that actually changed.
I saw that MariaDB 10.3 has an interesting feature that allows performing a SELECT against an older version of a table. However, I haven't found resources supporting the idea that it could be used for CDC. Any feedback on this feature?
If not, we'll probably resort to streaming the slave's binlogs to our data warehouse, but that looks challenging...
Thanks for your help!
(As a supplement to Stefan's answer.)
Yes, system versioning can be used for CDC, because the validity period spanned by ROW_START (when the row version became valid) and ROW_END (when it became invalid) can be interpreted to tell whether an INSERT, UPDATE or DELETE query happened (see the query sketch at the end of this answer). But it is more cumbersome than alternative CDC variants.
INSERT:
the object appears for the first time
ROW_START is the insertion time
UPDATE:
the object has appeared before
ROW_START is the update time
DELETE:
ROW_END lies in the past
there is no newer entry for this object in the following rows
I'll add a picture to clarify this.
You can see that this versioning is space-saving, because the information about the INSERT and DELETE of an object can be combined in one row, but checking for DELETEs is costly.
In the example above I used a table with a clear primary key, so the check for "same object" is easy: just look at the id. If you want to capture changes in tables with a composite key, the whole process gets more annoying.
Edit: another point is that the history data is kept in the same table as the "real" data. This may make an INSERT faster than known alternative solutions such as trigger-based tracking (like here), but if changes to the table are quite frequent and you want to process/analyse the CDC data, it can cause performance problems.
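To make the mapping above concrete, here is a minimal query sketch (table and column names follow the accounts example below; ROW_START and ROW_END are the invisible columns MariaDB maintains):
SELECT id, ROW_START, ROW_END
FROM accounts FOR SYSTEM_TIME ALL
WHERE ROW_START >= NOW() - INTERVAL 1 DAY
   OR ROW_END >= NOW() - INTERVAL 1 DAY
ORDER BY id, ROW_START;
Rows whose ROW_START falls inside the window are inserts or new versions from updates; rows whose ROW_END falls inside the window and that have no newer version are deletes.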
MariaDB has supported system-versioned tables since version 10.3.4. System-versioned tables are specified in the SQL:2011 standard. They can be used to automatically capture previous versions of rows; those versions can then be queried to retrieve their values as they were at a specific point in time.
The following text and code examples are from the official MariaDB documentation:
With system-versioned tables, MariaDB Server tracks the points in time when rows change. When you update a row on these tables, it creates a new row to display as current without removing the old data. This tracking remains transparent to the application. When querying a system-versioned table, you can retrieve either the most current values for every row or the historic values available at a given point in time.
You may find this feature useful in efficiently tracking the time of changes to continuously-monitored values that do not change frequently, such as changes in temperature over the course of a year. System versioning is often useful for auditing.
By adding SYSTEM VERSIONING to a newly created table, or to an already existing one (using ALTER TABLE, as sketched after the examples below), the table is extended with row_start and row_end timestamp columns that allow retrieving the record version valid between the two timestamps.
CREATE TABLE accounts (
    id INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(255),
    amount INT
) WITH SYSTEM VERSIONING;
It is then possible to retrieve data as it was at a specific time (with SELECT * FROM accounts FOR SYSTEM_TIME AS OF '2019-06-18 11:00';), all versions within a specific time range
SELECT * FROM accounts
FOR SYSTEM_TIME
BETWEEN (NOW() - INTERVAL 1 YEAR)
AND NOW();
or all versions at once:
SELECT * FROM accounts
FOR SYSTEM_TIME ALL;
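And for an already existing table, versioning can be enabled retroactively with standard MariaDB syntax (the table name is the same illustrative accounts):
ALTER TABLE accounts ADD SYSTEM VERSIONING;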

Microsoft Access Reusing Record ID's

I have a Microsoft Access (Office 365) database with a number of records in one of its tables. Let's say I have 5 records numbered 1 to 5 respectively. Now I delete record number 3, and record number 3 is gone. The problem I am having is that when I create a new record, number 3 is used again as the record number, despite the fact that record number 3 has been deleted. This, to me, should not be happening. I recently upgraded the Microsoft Access Database Engine Redistributable on our server from 2010 to 2016, thinking this might resolve the problem. The data type and primary key for the archiveID field is AutoNumber.
Please advise.
Sometimes an AutoNumber seed can be damaged, for example by manual assignment of values. Try to repair it by appending a row just above the current maximum, which resets the seed (the inserted row can be deleted afterwards):
INSERT INTO [TableName] ([archiveID]) SELECT MAX([archiveID]) + 1 FROM [TableName];
Then check whether the problem repeats.
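Alternatively (my addition, not from the original answer), the seed can be reset directly with Access DDL executed in ADO/ANSI-92 mode; COUNTER(seed, increment) is the AutoNumber type, and the seed of 6 assumes 5 is currently the highest archiveID:
ALTER TABLE [TableName] ALTER COLUMN [archiveID] COUNTER(6, 1);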

MYSQL, Event not running

I am trying to run a MySQL event at a particular time, to be repeated every 24 hours:
ALTER EVENT myeventToronto1
ON SCHEDULE EVERY 24 HOUR
STARTS '2013-08-04 02:02:00'
COMMENT 'A sample comment.'
DO
INSERT INTO `smallworksdb`.`eventtesttable` (ID, NAME) VALUES (1, 'someValue');
When I save the above event, the table does not get populated at the time specified in the event, so I am assuming that the event does not run.
What am I missing here?
Thanks
Why do I have this strong feeling that ID is declared as a primary key (or at least unique)? You are trying to insert the same value every day, so the problem would be an error generated by SQL, not that the job is not running.
Typically, a field like ID would be an auto_increment value that is set automatically on insert, not given a particular value.
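A sketch of that variant (assuming eventtesttable.ID is AUTO_INCREMENT, which is my assumption, not stated in the question):
INSERT INTO `smallworksdb`.`eventtesttable` (NAME) VALUES ('someValue');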
This is just speculation. There are other possibilities, such as the event scheduler being turned off, or the computer rebooting at exactly that time every day.
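The scheduler possibility is easy to rule out with standard MySQL statements:
SHOW VARIABLES LIKE 'event_scheduler';
SET GLOBAL event_scheduler = ON; -- if it reports OFF; requires the SUPER privilege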

Teradata identity column and "Duplicate unique prime key error in dbname.tablename"

I created a table using the below definition for a Teradata identity column:
ID INTEGER GENERATED BY DEFAULT AS IDENTITY
    (START WITH 1
     INCREMENT BY 1
     MINVALUE 0
     MAXVALUE 100000000
     NO CYCLE),
----
UNIQUE PRIMARY INDEX ( ID )
For several months, the ID column has been working properly, automatically generating a unique value for the column. Over the past month, however, ELMAH has been intermittently reporting the following exception from our .NET 4.0 ASP.NET app:
Teradata.Client.Provider.TdException: [Teradata Database] [2801] Duplicate unique prime key error in DATABASENAME.TABLENAME.
I was able to replicate it by opening SQL Assistant and inserting a bunch of records into the table with raw SQL. As expected, most of the time it would insert successfully, but other times it would throw the above exception.
It appears that this error occurs because Teradata is trying to generate a value for this column that it has previously generated.
Does anyone have any idea how to get to the bottom of what's happening? At the very least, I'd like some way to debug the issue a bit deeper.
I would suggest changing the definition of your identity column to GENERATED ALWAYS to prevent the application or ETL process from supplying a value that could already have been used. In fact, Teradata recommends that if you are using your IDENTITY column as part of a UPI, it should be defined as GENERATED ALWAYS ... NO CYCLE.
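A sketch of the resulting column definition, carrying over the options from the question (note: whether an existing identity column can be altered in place or the table must be recreated depends on your Teradata version, so treat that as an assumption to verify):
ID INTEGER GENERATED ALWAYS AS IDENTITY
    (START WITH 1
     INCREMENT BY 1
     MINVALUE 0
     MAXVALUE 100000000
     NO CYCLE)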
EDIT:
If your business requirements are such that you must be able to provide a value, I would also consider using a domain that is outside the range of values you have set aside for the IDENTITY column. You could use a negative domain, or a range that is an order of magnitude beyond that of the IDENTITY column. My personal preference would be the negative domain.