When creating a shipment via the API in Acumatica, I receive the error:
Another process has updated 'SOOrder' record. Your changes will be lost.
The way we create the shipment is as follows:
1. Add the sales order to the shipment.
2. Save.
3. Clear the screen: oScreen.SO302000Clear();
4. Load the created shipment.
5. Add shipment details such as bin locations, ship quantities, batch/serial numbers, etc.
6. Save.
The issue happens if a user opens the newly created shipment in the Acumatica UI right after it has been created. Then, when the API attempts to post the shipment details (steps 4 to 6), it throws:
PX.Data.PXLockViolationException: Error #90: Another process has updated 'SOOrder' record. Your changes will be lost.
Is there any way we can avoid that lock violation exception when editing a shipment that is currently open in the UI?
Saving a shipment triggers a long-running, asynchronous operation on the server. You need to wait for that operation to complete before you do anything else, by calling GetProcessStatus() and retrying until it reports completion. Otherwise, you'll run into concurrency issues with your second update call.
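A minimal polling sketch, assuming the same screen-based SOAP proxy that the question's oScreen.SO302000Clear() call comes from; GetProcessStatus() is the call named above, but the exact ProcessResult/ProcessStatus member names below may differ between Acumatica versions, so treat them as assumptions:

// Call right after the save that creates the shipment (step 2) and before
// loading it again (step 4), so the server-side operation has finished
// before the next call touches the same SOOrder record.
private static void WaitForLongRunningOperation(Screen oScreen)
{
    while (true)
    {
        // GetProcessStatus() reports the state of the last long-running call.
        ProcessResult result = oScreen.GetProcessStatus();

        if (result.Status == ProcessStatus.Completed)
            return;                                  // safe to continue
        if (result.Status == ProcessStatus.Aborted)
            throw new InvalidOperationException(
                "Server-side operation aborted: " + result.Message);

        System.Threading.Thread.Sleep(1000);         // poll once per second
    }
}

Note that this only guards against your own asynchronous save; if a user has the shipment open in the UI at the same time, you may still need to catch PXLockViolationException and retry the update itself.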
I am trying to connect my Soffid 3 server with our custom web application named Schrift. I am using a JSON REST Web Services Connector for this purpose. I added the REST Web Services plugin and then configured an agent with the JSON/XML/SOAP REST web service type.
Loading of objects works fine: the REST connector connects to the web service successfully and retrieves the account data.
The problem is that when I try to update some data (for example, when I try to lock an account), nothing happens. And unfortunately I don't know what should be happening. When should the REST connector send updated data to the managed system, and in which way? I didn't find any log entries saying that the REST connector was trying to update an object on the managed system. Maybe I did something wrong or missed something.
I would appreciate any help. I can post any configuration or log details if you need them.
Update#1
(I did some investigation after the first answer)
I checked the agent settings: Read only and Manual account creation are set to No.
The account was set to the unmanaged type, but I succeeded in changing its type to shared and then to single without getting an error. Now it is set to single.
The task queue is empty.
Also, I've checked that the update method is present and the update properties are set correctly. updateParams is not set (meaning that all attributes should be sent to the managed system).
But when I change the status of the account (from Enable to Disable), nothing happens.
In the console log I can see only this line:
14-Sep-2021 13:26:29.708 INFO [BPM-Scheduler:192.168.7.121:1] com.soffid.iam.bpm.job.JobExecutorThread.run No job to execute
When I manually run the task Analize impact for changes on Schrift, the execution log shows:
Changes detected for accounts
=============================
NO CHANGE DETECTED
Changes detected for roles
=============================
NO CHANGE DETECTED
Update#2
After many attempts I have made some progress. Now, when I make some changes to the account, a task named UpdateAccount baklykov#irf.com.ua#Schrift appears, but it fails with an error.
At first it was a 415 Unsupported Media Type error, as I wrote in the comments, but now it looks a little different:
Throws exception updating object : Extensible object [type = account]
EmployeeEmail: baklykov#irf.com.ua
IsLockedOut: true (log truncated) ...
caused by Unexpected response, Content-Type: null
Update#3
I found out that Soffid's request for updating the object was in an improper format (all the parameters were passed in the HTTP request itself instead of being put in the JSON body).
After some research I found a method property called Encoding and set it to the application/json value.
Now the parameters are passed in the JSON body (which is what I need), but the problem is that Soffid puts all the parameters in the JSON body, including the key parameter by which the object to update should be identified. My guess is that this is the reason why the object in the target system is still not updated.
In other words, my application expects a request like this:
https://myapp.mysite.com/api/v1/Soffid/Employees?EmployeeEmail=baklykov%40irf.com.ua :
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan"}
but Soffid sends this:
https://myapp.mysite.com/api/v1/Soffid/Employees:
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan","EmployeeEmail":"baklykov#irf.com.ua"}
The system should have created an UpdateAccount task in the task queue. Please verify:
The task engine is in automatic mode. In read-only or manual mode, no task will be created.
If you are updating an account, check that the account is not set as unmanaged. In that case, no task is created.
Finally, verify the task queue has not held the task up.
Have you checked the engine mode? Look at Main Menu > Administration > Configure Soffid > Integration engine > Smart engine settings
It should be set to automatic.
I'm new to NATS and have read all the examples for:
https://nats.io/documentation/concepts/nats-messaging/
I'm working in a microservice architecture where microservice Y (MSY) needs to store some information published by another microservice X (MSX). I have 2-10 instances of MSY, so when changes are made in MSX and an MSX instance publishes an event, I want only one instance of MSY to save the information, so that they don't all save the same data.
I have read about Request-Reply:
https://nats.io/documentation/concepts/nats-req-rep/
but there it seems that all instances receive the message (and will handle it), even though it is point-to-point and the reply is taken only from the one instance that is quickest to respond.
Is this correct, or have I misunderstood the example?
If I need only one instance of MSY to handle a given message (store the data in the DB), what can I do to achieve this?
Use queue groups. If you have multiple subscriptions on the same subject with the same queue group name, only one member of the group will receive each message.
Check this out: https://nats.io/documentation/concepts/nats-queueing/
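A minimal sketch with the official NATS .NET client (the NATS.Client NuGet package); the subject and queue-group names are made-up examples:

using System;
using NATS.Client;

class MsyWorker
{
    static void Main()
    {
        // Connects to nats://localhost:4222 by default.
        using IConnection conn = new ConnectionFactory().CreateConnection();

        // Every MSY instance subscribes to the same subject with the same
        // queue group, so NATS delivers each message to exactly one member.
        IAsyncSubscription sub = conn.SubscribeAsync(
            "msx.changes",   // subject MSX publishes to (assumed)
            "msy-workers",   // queue group shared by all MSY instances
            (sender, args) =>
            {
                // Only one MSY instance runs this per message.
                Console.WriteLine("Saving: " + args.Message);
            });

        Console.ReadLine(); // keep the subscriber alive
    }
}

Run several copies of this process and publish to msx.changes: each message is printed by exactly one of them.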
I'm using a SharedObject to create a simple chat app. The SharedObject was created fine, and my app receives the sync event when other clients update the data on the SO. However, the problem comes when my app tries to save data to the SO to signal the other clients. I've verified that the data was changed using the following code:
trace("before:"+so.data.chatMessage);
so.data.chatMessage = msg.text;
trace("after:"+so.data.chatMessage);
It said "before:abc" and "after:def". Unfortunately, no client received the sync event after the data on the SO changed, including the client that made the change itself. So this client can receive other clients' messages, but its own messages never get out.
Has anybody seen such an issue before? Thanks,
Jack
You have to call flush():
If you don't use this method, Flash Player writes the shared object to a file when the shared object session ends — that is, when the SWF file is closed, when the shared object is garbage-collected because it no longer has any references to it, or when you call SharedObject.clear() or SharedObject.close().
or
use setProperty() to change the property:
Updates the value of a property in a shared object and indicates to the server that the value of the property has changed.
As you only change a property of the data object, there's no notification going on that this value has changed.
Calling so.flush() resulted in "Error: Error #2130: Unable to flush SharedObject." It did not print an internal error, though. So it seems the problem is that the flush couldn't succeed... Any idea how this could happen?
Take a look at this other question:
Error #2130 Unable to flush sharedObject
I have a scenario that has been causing me issues for the last few weeks now. I currently have a "homepage" that populates with sports stats from a controller. This data comes from a service that is also used for the individual pages for those sports stats.
For instance, a user is able to click one of the listings on the home page and get a detailed view of that particular entry, with a state change from sports.com to sports.com/id/sport.
I do this by taking the id from the home service and passing it through the parameters within the state for the details page. From there I use that same service with the id as a parameter in order to get the details for that page (using $stateParams.id).
Normally that would work fine, but here is the problem: sometimes when hitting the details page, the service fires off the GET request before $stateParams.id is available, and I end up with an error in my request. So instead of /json.php?detail=id I'm getting /json.php?detail=.
For a cheap fix I now have the search query waiting on an 800 ms timeout in order to give the state time to resolve $stateParams.id before finally sending out the request. My question is: what is a better way to do this? Is this something experienced often? It seems like in all my time with Angular I haven't run into this situation, so I'm a bit at a loss. Thanks
I am using an SSIS event handler to trigger an email whenever an error occurs anywhere in the package (package-level OnError). Here the number of emails triggered is equal to the number of errors generated. How can I restrict it to one mail even though the same error occurs 10 times?
Please suggest....
You have a few options. The problem with setting an OnError email at the package level is that it will send an email for each error the package encounters. This gets ugly if a deeply nested transform fails, because it raises an error at each level as the failure bubbles up to the package.
I suggest that you either:
1) Set up OnError events at the task level and remove the package-level event. Usually this will be good enough; most tasks will only have one error to report. Be careful with Data Flows, as they can act in a similar fashion to the package-level events.
2) Set up some sort of advanced logging. I've seen this done several ways. I've seen people set up Script Tasks to log the errors (at the task level) to a variable and then send a final email containing the variable in the body (at the control-flow level); a minimal sketch of that pattern follows below. I have also seen people call stored procedures (at the task and package level) for each error that occurs. The sproc would log errors to the DB and allow the package to continue on to the next step/container. The logged errors can then be dumped into a CSV and emailed as an attachment.
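A minimal Script Task sketch of the variable-logging pattern from option 2, placed inside an OnError event handler; User::ErrorLog is an assumed String variable that must be listed in the task's ReadWriteVariables:

// Body of the Script Task's Main() inside the OnError event handler. It
// collects each distinct error message instead of emailing immediately;
// a final Send Mail Task then sends User::ErrorLog once.
public void Main()
{
    // System::ErrorDescription is available inside OnError event handlers.
    string log = (string)Dts.Variables["User::ErrorLog"].Value;
    string error = (string)Dts.Variables["System::ErrorDescription"].Value;

    // Skip duplicates so the same error repeated 10 times yields one entry.
    if (!log.Contains(error))
        Dts.Variables["User::ErrorLog"].Value = log + error + System.Environment.NewLine;

    Dts.TaskResult = (int)ScriptResults.Success;
}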
If you like your current setup, you can try changing the error properties for each container/task. I haven't ever done this, but I do know you can change the way tasks handle errors. I don't like this option because you could possibly miss errors (maybe? kind of guessing).
Update, from another solution: if you want to keep your current package-level OnError email and simply prevent certain errors from "bubbling" up and sending emails, you can handle those errors gracefully at the task level (typically by setting the System::Propagate variable to False in that task's OnError event handler), which stops those tasks' errors from reaching your OnError event at the package level. Good luck.
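A sketch of that suppression trick: a Script Task inside the failing task's OnError event handler, with System::Propagate listed in its ReadWriteVariables:

// Body of the Script Task's Main() in the task-level OnError handler.
public void Main()
{
    // Stop this error from bubbling up to the package-level OnError handler,
    // so the package email is never triggered for it.
    Dts.Variables["System::Propagate"].Value = false;
    Dts.TaskResult = (int)ScriptResults.Success;
}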