Unique index violation - sql-server-2008

I have a table with following structure:
CREATE TABLE [dbo].[Photos](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[OriginalUrl] [nvarchar](450) NOT NULL,
[ObjCode] [nvarchar](10) NOT NULL,
[ProviderCode] [int] NOT NULL,
[ImageId] [int] NOT NULL
)
and one of the indexes is:
CREATE UNIQUE NONCLUSTERED INDEX [IX_Photos_ObjCode_ProvCode_ImageId] ON
[dbo].[Photos]
(
[ObjCode] ASC,
[ProviderCode] ASC,
[ImageId] ASC
)
The general architecture is:
web api - responsible for handling incoming requests and returning data stored in the database, or sending requests to a queue if there is no data
60 instances of handlers which consume the queue, process requests and store the data in the DB
I get many exceptions when a handler instance tries to insert data which shouldn't violate the uniqueness constraint. For example, I get the following error:
An error occurred while updating the entries. See the inner exception for details. Cannot insert duplicate key row in object 'dbo.Photos' with unique index 'IX_Photos_ObjCode_ProvCode_ImageId'. The duplicate key value is (ART345, 2625, 0).
when I try to insert a set of items with different parameters, for example "PKM6778, 8976, 0" (ObjCode, ProvCode, ImageId).
It is not possible to reproduce this bug while debugging or working with a single handler instance. The logs also show that none of the sets contains any item that could violate this index.
Stack: ASP.NET Core 2.2, EF Core 2.0, MSSQL 2008

I think you have a bug in your handler code; maybe some variables are not thread-safe, for example class-level instead of function-level. You should inspect your code.

Related

Getting/creating relational data on the fly in MySQL/MariaDB

I'm developing a distributable application that logs event data to a MySQL database. Some of the data it logs is redundant, like who caused the event, etc. A dumb example might be user: bob, action: created, target: file123
The schema is normalized so instead of storing bob every time, I'd store user_id. The problem I have is that my app is merely a layer that other applications would send data to so I won't always have a user record before I need to log an event.
To accommodate this, I wrote a "get or create" procedure that checks if that user exists, if so returns the user_id, otherwise creates a new entry and returns the generated key. (I had tried ON DUPLICATE KEY UPDATE but it doesn't play well with auto-increment primary keys in this scenario, it kept generating a new key).
For example, I might use:
CREATE PROCEDURE getOrCreateUser(IN `p_username` VARCHAR(25), OUT `userId` INT(11))
BEGIN
-- The parameter is named p_username so it does not shadow the username column
-- in the WHERE clause (MySQL resolves same-named references to the parameter).
SELECT user_id INTO `userId` FROM users WHERE username = `p_username`;
IF `userId` IS NULL THEN
INSERT INTO users (`username`) VALUES (`p_username`);
SET `userId` = LAST_INSERT_ID();
END IF;
END
Now, when INSERTing an event I can CALL getOrCreateUser(...) to ensure that user record exists.
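For illustration, logging a single event might then look like the sketch below (the events table and its columns are just an assumption made up for this example, not my actual schema):
-- Hypothetical usage: resolve the user id, then insert the event
CALL getOrCreateUser('bob', @uid);
INSERT INTO events (user_id, action, target)
VALUES (@uid, 'created', 'file123');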
This works but I'm wondering if this is a wise approach. Say the application batch inserts 1000 event records, this would be called 1000 times.
The only way to reduce that is to call this once and cache the username/userId key/value pair in memory.
I just feel like there are a few issues with that approach:
That could become inefficient if I have 100k users.
With proper indexes maybe an in-memory Map isn't much better?
Some other problem I'm not thinking of...
I've never taken this approach and am looking for insight from more experienced MySQL devs.

MySQL 8: Create Collections via DDL

I’d like to be able to create MySQL Document Store Collections via simple SQL DDL statements rather than using the X-Protocol clients.
Is there any way to do so?
Edit: I’ll try and clarify the question.
Collections are tables using JSON datatypes and functions. That much is clear.
I would like to know how I can create a Collection without using the X-Protocol calls and make sure that the aforementioned collection is picked up as an actual Collection.
Judging from MySQL workbench, collection tables have a _id blob PK with an expression, a doc JSON column and a few other elements I do not recall at the moment (might be indexes, etc).
I have no way to tell via Workbench what additional schema/metadata information is required for a table to be considered a Document Store Collection, or whether the mere presence of the _id and doc columns is enough.
I hope this clears things up.
All "x-api" instructions are directly mapped to sql syntax. When you e.g. run db.createCollection('my_collection'), MySQL will literally just execute
CREATE TABLE `my_collection` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS
(json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
`_json_schema` json GENERATED ALWAYS AS (_utf8mb4'{"type":"object"}') VIRTUAL,
PRIMARY KEY (`_id`),
CONSTRAINT `$val_strict` CHECK (json_schema_valid(`_json_schema`,`doc`))
NOT ENFORCED
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
You can run the corresponding SQL statements yourself if you follow that format.
The doc and _id columns (with their types and the given expressions) are required; the _json_schema column is optional, as is the check constraint (which was only added in MySQL 8.0.17). Since MySQL 8, no additional columns are allowed, except generated columns that use JSON_EXTRACT on doc and which are supposed to be used in an index, see below (although they don't actually have to be used in an index).
Any table that looks like that - doc and _id with their correct type/expression and no other columns except an optional _json_schema and generated JSON_EXTRACT(doc, ...) columns - will be found by getCollections().
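Based on that, a minimal sketch (my assumption, following the format above) of a table that should still be detected as a collection, with the optional parts left out, would be:
-- Minimal collection table sketch; the table name is just an example.
-- _json_schema and the check constraint are omitted since they are optional.
CREATE TABLE `my_minimal_collection` (
`doc` json DEFAULT NULL,
`_id` varbinary(32) GENERATED ALWAYS AS
(json_unquote(json_extract(`doc`,_utf8mb4'$._id'))) STORED NOT NULL,
PRIMARY KEY (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;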
To add an index, the corresponding syntax for
my_collection.createIndex("age", {fields: [{field: "$.age", type: "int"}]})
would be
ALTER TABLE `test`.`my_collection` ADD COLUMN `$ix_i_somename` int
GENERATED ALWAYS AS (JSON_EXTRACT(doc, '$.age')) VIRTUAL,
ADD INDEX `age` (`$ix_i_somename`)
Obviously,
db.dropCollection('my_collection')
simply translates to
DROP TABLE `my_collection`
Similarly, all CRUD operations on documents have a corresponding SQL DML syntax (which will actually be executed when you use them via the x-api).
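As a rough illustration only (my sketch of the kind of DML involved, not necessarily the exact statements the X Plugin generates), adding and finding a document could correspond to something like:
-- Assumed rough equivalent of my_collection.add({"_id": "1", "name": "bob"})
INSERT INTO `my_collection` (`doc`) VALUES ('{"_id": "1", "name": "bob"}');
-- Assumed rough equivalent of my_collection.find("name = 'bob'")
SELECT `doc` FROM `my_collection`
WHERE JSON_UNQUOTE(JSON_EXTRACT(`doc`, '$.name')) = 'bob';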

Geoserver null return after a wfs transaction

I was wondering if there is a way to change which column of a table is returned by GeoServer after an insert transaction. I have a layer street_segment whose primary key is the column FeatId (autoincrement). In the GeoServer demo requests, whenever I perform an insert transaction, GeoServer responds as shown in the screenshot. The record, however, is created in the Postgres database with only the primary key FeatId inserted and all other fields null (even though they have default values). Is there any way to configure the GeoServer WFS service to return FeatId instead of fid when a transaction is executed? The fid column is not present in the database. I have tried to remove, re-add and republish the layer, but I still get the same result.

Rails: ActiveRecord::UnknownPrimaryKey exception

A ActiveRecord::UnknownPrimaryKey occurred in survey_response#create:
Unknown primary key for table question_responses in model QuestionResponse.
activerecord (3.2.8) lib/active_record/reflection.rb:366:in `primary_key'
Our application has been raising these exceptions and we do not know what is causing them. The exception happens in both production and test environments, but it is not reproducible in either. It seems to have some relation to server load, but even in times of peak loads some of the requests still complete successfully. The app (both production and test environments) is Rails 3.2.8, ruby 1.9.3-p194 using MySQL with the mysql2 gem. Production is Ubuntu and dev/test is OS X. The app is running under Phusion Passenger in production.
Here is a sample stack trace: https://gist.github.com/4068400
Here are the two models in question, the controller and the output of "desc question_responses;": https://gist.github.com/4b3667a6896b60383dc3
It most definitely has a primary key, which is a standard rails 'id' column.
Restarting the app server temporarily stops the exceptions from occurring; otherwise they happen over a period of 30 minutes to 6 hours, starting as suddenly as they stop.
It always occurs on the same controller action, table and model.
Has anyone else run into this exception?
FWIW, I was getting this same intermittent error and after a heck of a lot of head-scratching I found the cause.
We have separate DBs per client, and somehow one of the client's DBs had a missing primary key on the users table. This meant that when that client accessed our site, Rails updated its in-memory schema to that of the database it had connected to, with the missing primary key. Any future requests served by that Passenger app process (or any others that had been 'infected' by this client) which tried to access the users table borked out with the primary key error, regardless of whether that particular database had a primary key.
In the end a fairly self-explanatory error, but difficult to pin down when you've got 500+ databases and only one of them was causing the problem, and it was intermittent.
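If it helps anyone in the same situation, a query along these lines (just a sketch; it assumes MySQL with per-client schemas and a table named users, as in our case) can narrow down which database is missing the key:
-- Sketch: list schemas whose users table has no PRIMARY KEY constraint
SELECT t.table_schema
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
ON c.table_schema = t.table_schema
AND c.table_name = t.table_name
AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_name = 'users'
AND c.constraint_name IS NULL;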
Got this problem because my workers used a shared connection to the database. But I was on Unicorn.
I know that Passenger reconnects by default, but maybe you have some complicated logic - connections to a number of databases, for example. So you need to reconnect all connections.
This same thing happened to me. I had a composite primary key in one of my table definitions which caused the error. It was further compounded because annotate models did not (but will shortly / does now) support annotation of composite primary keys.
My solution was to make the id column the only primary key and add a constraint (not shown) for the composition. To do this you need to drop auto_increment on id if it is set, drop your composite primary key, then re-add both the primary key status and auto_increment.
ALTER TABLE indices MODIFY id INT NOT NULL;
ALTER TABLE indices DROP PRIMARY KEY;
ALTER TABLE indices MODIFY id INT NOT NULL PRIMARY KEY AUTO_INCREMENT;
On a Postgres database:
ALTER TABLE indices ALTER COLUMN id SET DATA TYPE INT;
ALTER TABLE indices ADD PRIMARY KEY (id);

Unique, Non-Required Column in SQL Server 2008

Using SQL Server 2008, I have a requirement that email addresses in my user table must be unique, but email addresses are not required in my app. I've tried to make email addresses required, but I just cannot get the customer to budge on this one. What is the best method to create this constraint? If I add a constraint, what does the constraint look like? If I add a before insert trigger, what does that look like?
This application will be running on a multi-server web farm, and there are multiple points of entry. An application-level lock() { } is not a great option. I'm doing a select and verifying that the email doesn't exist from code right before performing the insert operation to give immediate feedback to the user. However, I would really like a database-level solution here. Just in case.
I'm using an ORM (NHibernate with AutoMapping), so I'd rather not insert or update records with a stored procedure.
Use a unique filtered index:
create table Foo (
Id int not null identity(1,1) primary key
, Name varchar(256) null
, Address varchar(max) null
, Email varchar(256) null);
create unique index ndxFooEmail on Foo(Email)
where Email is not null;
This is a sure-shot, 100% bullet-proof way to guarantee uniqueness of an optional value. The uniqueness is enforced in the database server; your ORM/DAL does not need to worry about it. Simply handle the error if the unique constraint is violated rather than try to duplicate the constraint in the ORM/DAL (it is not really possible to do that correctly under concurrency).
See Filtered Index Design Guidelines for more details about SQL Server 2008 filtered indexes.
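To see the behavior (the sample values are made up), multiple rows without an email are allowed, while a repeated non-null email is rejected:
-- Both succeed: the filtered index ignores rows where Email is null
insert into Foo (Name, Email) values ('alice', null);
insert into Foo (Name, Email) values ('bob', null);
-- Succeeds: first occurrence of this email
insert into Foo (Name, Email) values ('carol', 'carol@example.com');
-- Fails with a duplicate key error on ndxFooEmail
insert into Foo (Name, Email) values ('dave', 'carol@example.com');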