GeoServer null return after a WFS transaction

I was wondering if there is a way to change which column of a table is returned by GeoServer after an insert transaction. I have a layer street_segment whose primary key is the column FeatId (auto-increment). In the GeoServer demo requests, whenever I perform an insert transaction, GeoServer responds as in the screenshot. The record, however, is created in the Postgres database with only the primary key FeatId inserted; all other fields are null (even though they have default values). Is there any way to configure the GeoServer WFS service to return FeatId instead of fid when a transaction is executed? The fid column is not present in the database. I have tried to remove the layer, re-add it, and republish it, but the result is the same.
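One detail worth noting from the symptoms: a column DEFAULT only applies when an INSERT omits the column entirely; if the generated insert supplies explicit NULLs for the other fields, the defaults never fire. A minimal Postgres illustration with a hypothetical table (the real street_segment schema isn't shown here):

-- Hypothetical table; only illustrates DEFAULT vs. explicit NULL.
CREATE TABLE street_segment_demo (
    "FeatId" SERIAL PRIMARY KEY,
    status   TEXT DEFAULT 'new'
);
INSERT INTO street_segment_demo (status) VALUES (NULL); -- status stays NULL
INSERT INTO street_segment_demo DEFAULT VALUES;         -- status becomes 'new'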

Related

Getting/creating relational data on the fly in MySQL/MariaDB

I'm developing a distributable application that logs event data to a MySQL database. Some of the data it logs is redundant, like who caused the event, etc. A dumb example might be user: bob, action: created, target: file123
The schema is normalized, so instead of storing bob every time, I'd store user_id. The problem I have is that my app is merely a layer that other applications send data to, so I won't always have a user record before I need to log an event.
To accommodate this, I wrote a "get or create" procedure that checks whether the user exists; if so, it returns the user_id, otherwise it creates a new entry and returns the generated key. (I had tried ON DUPLICATE KEY UPDATE, but it doesn't play well with auto-increment primary keys in this scenario; it kept generating a new key.)
For example, I might use:
CREATE PROCEDURE getOrCreateUser(IN p_username VARCHAR(25), OUT p_userId INT)
BEGIN
    -- Prefix the parameters: an unqualified `username` in the WHERE clause
    -- would resolve to a same-named parameter, making the condition always true.
    SELECT user_id INTO p_userId FROM users WHERE users.username = p_username;
    IF p_userId IS NULL THEN
        INSERT INTO users (username) VALUES (p_username);
        SET p_userId = LAST_INSERT_ID();
    END IF;
END
Now, when INSERTing an event I can CALL getOrCreateUser(...) to ensure that user record exists.
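For instance, the OUT parameter can be captured in a user variable (the events table below is hypothetical, just to mirror the user/action/target example above):

CALL getOrCreateUser('bob', @uid);
-- `events` is a hypothetical logging table used only for illustration.
INSERT INTO events (user_id, action, target) VALUES (@uid, 'created', 'file123');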
This works, but I'm wondering whether it is a wise approach. Say the application batch-inserts 1000 event records; this procedure would be called 1000 times.
The only way to reduce that is to call it once per user and cache the username/userId key/value pair in memory (see the sketch after this list).
I just feel like there are two issues with that approach:
That could become inefficient if I have 100k users.
With proper indexes maybe an in-memory Map isn't much better?
Some other problem I'm not thinking of...
I've never taken this approach and am looking for insight from more experienced MySQL devs.
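A hedged sketch of the batch alternative, assuming a UNIQUE index on users.username (implied by the lookup, not shown in the question); note that INSERT IGNORE can leave gaps in the auto-increment sequence:

-- Step 1: create any missing users in one statement; existing ones are skipped.
INSERT IGNORE INTO users (username) VALUES ('bob'), ('alice'), ('carol');
-- Step 2: fetch the ids for the whole batch in one round trip and cache them.
SELECT username, user_id FROM users WHERE username IN ('bob', 'alice', 'carol');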

Getting id generated in a trigger for further requests

I have a table with two columns:
caseId, referring to a foreign table column
caseEventId, int, unique for a given caseId, which I want to auto-increment for the same caseId.
I know that the auto-increment option based on another column is not available in MySQL with InnoDB:
MySQL Auto Increment Based on Foreign Key
MySQL second auto increment field based on foreign key
So I generate caseEventId in a trigger. My table:
CREATE TABLE IF NOT EXISTS mydb.caseEvent (
  `caseId` CHAR(20) NOT NULL,
  `caseEventId` INT NOT NULL DEFAULT 0,
  PRIMARY KEY (`caseId`, `caseEventId`)
  # Foreign key definition omitted, not important here.
) ENGINE = InnoDB;
And my trigger:
CREATE DEFINER=`root`@`%` TRIGGER `mydb`.`caseEvent_BEFORE_INSERT` BEFORE INSERT ON `caseEvent` FOR EACH ROW
BEGIN
SELECT COALESCE((SELECT MAX(caseEventId) + 1 FROM caseEvent WHERE caseId = NEW.caseId), 0)
INTO @newCaseEventId;
SET NEW.`caseEventId` = @newCaseEventId;
END
With this, I get my caseEventId, which auto-increments.
However, I need to re-use this new caseEventId in further calls within my INSERT transaction, so I place the id into @newCaseEventId within the trigger and use it in the following statements:
START TRANSACTION;
INSERT INTO mydb.caseEvent (caseId) VALUES ('fziNw6muQ20VGYwYPW1b');
SELECT @newCaseEventId;
# Do stuff based on @newCaseEventId
COMMIT;
This seems to work just fine, but what about concurrency, connection pools, etc.?
Is this @newCaseEventId variable going to be shared by all clients using the same connection, and can I run into problems when my client server launches two concurrent transactions? This is using MySQL under Node.js.
Is this safe, or is there a safer way to go about this? Thanks.
Edit 2020/09/24
FYI, I have dropped this approach altogether; I was trying to use the db in a way it isn't meant to be used.
Basically, I have dropped caseEventId and any index that is supposed to increment nicely based on a given column value.
I rely instead on properly written queries on the read side, when I retrieve data, to recreate my caseEventId field, as sketched below.
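A hedged sketch of that read-side recreation (MySQL 8.0+ window functions; the createdAt ordering column is an assumption, since the posted table has no insertion-order column):

SELECT
  caseId,
  -- createdAt is hypothetical; any column fixing insertion order would do.
  ROW_NUMBER() OVER (PARTITION BY caseId ORDER BY createdAt) - 1 AS caseEventId
FROM mydb.caseEvent;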
That is no problem; user-defined variables are per client.
That means every client session has its own user-defined variables.
User-defined variables are session specific. A user variable defined by one client cannot be seen or used by other clients. (Exception: A user with access to the Performance Schema user_variables_by_thread table can see all user variables for all sessions.) All variables for a given client session are automatically freed when that client exits.
See the manual.

MySQL export/import empty/null multilinestring

Using phpMyadmin
I have exported a database which has a table with a spatial multilinestring field. Some of the records in this table have a null entry in this field.
When I try to import, the table fails with this message:
1416 - Cannot get geometry object from data you send to the GEOMETRY field
What do I need to replace the null value with for records where this field is not required?
I finally worked this out, so I'm posting back in case anyone else struggles with this.
It was not to do with null entries, as I first thought. In phpMyAdmin, the export batches the inserts together; if you select the option 'include column names in every INSERT statement', each record gets its own INSERT statement.
Then, when you import, it will show you which record is failing. It was not failing on a null entry.
I then ran a query like this on the failing record to view the points in the multilinestring:
SELECT ST_AsText(Polylines) FROM myTable WHERE id = 100;
This showed me that one of the polylines had only one point, and that was the issue: importing a polyline with only one point into MySQL fails, which makes sense (although it did allow the record to be inserted in the first place). In my case, I set the single-point polylines to null to fix the issue.
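A minimal sketch of that fix, assuming the failing row's id is known from the import output (id = 100, as in the query above):

-- Null out the geometry for the known-bad row so the import can proceed.
UPDATE myTable SET Polylines = NULL WHERE id = 100;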
I was exporting and importing between the same version of MySQL: 5.7.26.

Unique index violation

I have a table with following structure:
CREATE TABLE [dbo].[Photos](
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [OriginalUrl] [nvarchar](450) NOT NULL,
    [ObjCode] [nvarchar](10) NOT NULL,
    [ProviderCode] [int] NOT NULL,
    [ImageId] [int] NOT NULL
)
and one of the indexes is:
CREATE UNIQUE NONCLUSTERED INDEX [IX_Photos_ObjCode_ProvCode_ImageId] ON
[dbo].[Photos]
(
[ObjCode] ASC,
[ProviderCode] ASC,
[ImageId] ASC
)
The general architecture is:
web api - responsible for handling incoming requests and returning data stored in the database, or sending requests to a queue if there is no data
60 instances of handlers, which consume the queue, process requests, and store the data in the db
I get many exceptions when a handler instance tries to insert data that shouldn't violate the uniqueness of the data. For example, I get the following error:
An error occurred while updating the entries. See the inner exception for details. Cannot insert duplicate key row in object 'dbo.Photos' with unique index 'IX_Photos_ObjCode_ProvCode_ImageId'. The duplicate key value is (ART345, 2625, 0).
when I try to insert a set of items with different parameters, for example "PKM6778, 8976, 0" (ObjCode, ProviderCode, ImageId).
It is not possible to reproduce this bug while debugging or when working with a single handler instance. The logs also show that none of the sets contains any item that could violate this index.
Stack: ASP.NET Core 2.2, EF Core 2.0, MSSQL 2008
I think you have a bug in your handler code; maybe some variables are not thread-safe, for example class-level instead of function-level. You should inspect your code.
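If it instead turns out that two handlers are racing between an existence check and the insert (not confirmed by the question), one hedged T-SQL pattern makes the check-and-insert atomic; the @-parameters below are placeholders:

BEGIN TRANSACTION;
-- UPDLOCK + HOLDLOCK hold a key-range lock until COMMIT, so a concurrent
-- handler checking the same (ObjCode, ProviderCode, ImageId) blocks here.
IF NOT EXISTS (
    SELECT 1 FROM [dbo].[Photos] WITH (UPDLOCK, HOLDLOCK)
    WHERE [ObjCode] = @ObjCode AND [ProviderCode] = @ProviderCode AND [ImageId] = @ImageId
)
    INSERT INTO [dbo].[Photos] ([OriginalUrl], [ObjCode], [ProviderCode], [ImageId])
    VALUES (@OriginalUrl, @ObjCode, @ProviderCode, @ImageId);
COMMIT;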

Syncing table records with a service response frequently

I am requesting data from a service whose response is stored in a database. At first I have an empty table; when I make my very first request, the records from the service are written to my database table.
From then on, whenever I make another request, the service may return records that are the same as in my first response, new records, updated records, etc.
My question is: how do I update my table with respect to the responses coming from the service from the second request onwards, so that unchanged records remain the same, new records are added, and updated records are updated? Do I need to write a stored procedure on my DB, or is there some other workaround? And what would the scenario be if I used a NoSQL DB like MongoDB?
Thanks in advance.
Locate whichever subset of the service response identifies a record; this should be set as the primary key in your database table, or else a secondary key over which a uniqueness constraint is enforced.
In MySQL, use INSERT ... ON DUPLICATE KEY UPDATE.
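A hedged sketch with a hypothetical records table, keyed by whatever identifies a record in the service response:

-- record_id stands in for the identifying subset of the response.
CREATE TABLE records (
    record_id BIGINT NOT NULL PRIMARY KEY,
    payload   TEXT
);
-- One statement covers all three cases: new rows are inserted, changed rows
-- are updated, and unchanged rows are rewritten with the same value.
INSERT INTO records (record_id, payload)
VALUES (42, 'latest payload from the service')
ON DUPLICATE KEY UPDATE payload = VALUES(payload);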