SharedObject: can receive event from other clients but never fires event after saving data - actionscript-3

I'm using a SharedObject to create a simple chat app. The SharedObject was created fine, and my app receives the sync event when other clients update the data on the SO. The problem comes when my app tries to save data on the SO to signal the other clients. I've verified that the data was changed using the following code:
trace("before:"+so.data.chatMessage);
so.data.chatMessage = msg.text;
trace("after:"+so.data.chatMessage);
It said "before:abc" and "after:def". Unfortunately no clients received the sync event after the data on the SO changed including the client that made the data change itself. So this means this client can receive other client's message but itself message never gets out.
Has anybody seen this issue before? Thanks,
Jack

You have to call flush():
If you don't use this method, Flash Player writes the shared object to a file when the shared object session ends — that is, when the SWF file is closed, when the shared object is garbage-collected because it no longer has any references to it, or when you call SharedObject.clear() or SharedObject.close().
or
use setProperty() to change the property:
Updates the value of a property in a shared object and indicates to the server that the value of the property has changed.
Because you only assign to a property of the data object directly, no notification that the value has changed is ever sent.
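A minimal sketch of the setProperty() route, using the so and msg from the question:
// Assigning to so.data.chatMessage directly only changes the local copy;
// setProperty() updates the value and tells the server it has changed,
// so the other clients receive their sync events.
so.setProperty("chatMessage", msg.text);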
Calling so.flush() resulted in "Error: Error #2130: Unable to flush SharedObject." It did not print an internal error, though. So it seems the problem is that the flush couldn't succeed... Any idea how this could happen?
Take a look at this other question:
Error #2130 Unable to flush sharedObject
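In the meantime, flush() does report its outcome, so you can at least tell a pending storage prompt apart from a hard failure. A minimal sketch around the so from the question:
import flash.events.NetStatusEvent;
import flash.net.SharedObjectFlushStatus;

function saveChatMessage(text:String):void {
    so.setProperty("chatMessage", text);
    try {
        // flush() returns "flushed" on success, or "pending" if the user
        // is being prompted to allow more local storage space.
        if (so.flush() == SharedObjectFlushStatus.PENDING) {
            so.addEventListener(NetStatusEvent.NET_STATUS, onFlushStatus);
        }
    } catch (e:Error) {
        // Error #2130 lands here, e.g. when local storage is disabled
        // or the object cannot be written to disk.
        trace("flush failed: " + e.message);
    }
}

function onFlushStatus(evt:NetStatusEvent):void {
    // "SharedObject.Flush.Success" or "SharedObject.Flush.Failed"
    trace(evt.info.code);
    so.removeEventListener(NetStatusEvent.NET_STATUS, onFlushStatus);
}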

Related

Couchbase Sync-Gateway Multiple Clients

I'm currently playing around with the Couchbase Sync-Gateway and have built a demo app.
What is the intended behavior if a user logs in with the same username on a different device (which has an empty database), or if he has deleted the local database?
I'm expecting that all the data from the server should get synced back to the clients.
Is this correct?
My problem is that if I delete the database or log in from a different device, nothing gets synced.
OK, I figured it out, and it's exactly how I thought it would be.
If I log in from a different device, I get all the data synced automatically.
My problem was the missing sync function. I thought it would use a default and route all documents to the public channel automatically.
I'm now using the following simple sync function:
"sync": `function (doc, oldDoc) {
channel('!');
access('demo#example.com', '*');
}`
This will simply route all documents to the public channel and grant my demo-user access to it.
I think this shouldn't be used in production but it's a good starting point for playing around.
Now everything is working fine.
Edit: I've now found the missing info:
https://docs.couchbase.com/sync-gateway/current/configuration-properties.html#databases-this_db-sync
If you don't supply a sync function, Sync Gateway uses the following default sync function
...
The channels property is an array of strings that contains the names of the channels to which the document belongs. If you do not include a channels property in a document, the document does not appear in any channels.

Why does my Soffid JSON REST Web Services Connector not update an object in the target system?

I am trying to connect my Soffid 3 server with our custom web application named Schrift. I am using a JSON REST Web Services Connector for this purpose. I added the REST Web service plugin and then configured an agent of the JSON/XML/SOAP REST web service type.
Loading of objects works fine: my REST connector connects to the web service successfully and gets the account data.
The problem is that when I try to update some data (for example, when I try to lock an account), nothing happens. And unfortunately I don't know what should be happening: when should the REST connector send updated data to the managed system, and in which way? I didn't find any log entries saying that the REST connector was trying to update an object on the managed system. Maybe I did something wrong or missed something.
I would appreciate any help. I can post any config or log details if you need.
Update#1
(I did some investigation after the first answer)
I checked the agent settings: Read only and Manual account creation are set to no
The account was set to unmanaged type, but I succeeded in changing its type to shared and then to single without getting an error. Now it is set to single
The task queue is empty.
Also I've checked that the update method is present and the update properties are set correctly. updateParams is not set (which means that all attributes should be sent to the managed system).
But when I change the status of the account (from Enable to Disable), nothing happens.
In the console log I can see only these lines
14-Sep-2021 13:26:29.708 INFO [BPM-Scheduler:192.168.7.121:1] com.soffid.iam.bpm.job.JobExecutorThread.run No job to execute
When I manually run the task Analize impact for changes on Schrift, the Execution log shows:
Changes detected for accounts
=============================
NO CHANGE DETECTED
Changes detected for roles
=============================
NO CHANGE DETECTED
Update#2
After many attempts I have made some progress. Now when I make some changes to the account, a task named UpdateAccount baklykov@irf.com.ua@Schrift appears, but it runs with an error.
At first it was a 415 Unsupported Media Type error, as I wrote in the comments, but now it looks a little different:
Throws exception updating object : Extensible object [type = account]
EmployeeEmail: baklykov@irf.com.ua
IsLockedOut: true (log truncated) ...
caused by Unexpected response, Content-Type: null
Update#3
I found out that Soffid's request for updating the object was in an improper format (all the parameters were passed in the URL query string instead of being put in the JSON body).
After researching I found a property of the method called Encoding and set it to the value application/json.
Now the parameters are passed in the JSON body (which is what I need), but the problem is that Soffid puts all the parameters in the JSON body, including the key parameter by which the object to update should be identified. My guess is that this is why the object in the target system is still not updated.
In other words my application expects a request like this:
https://myapp.mysite.com/api/v1/Soffid/Employees?EmployeeEmail=baklykov%40irf.com.ua :
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan"}
but Soffid sends this:
https://myapp.mysite.com/api/v1/Soffid/Employees:
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan","EmployeeEmail":"baklykov#irf.com.ua"}
The system should have created an UpdateAccount task in the task queue. Please verify:
The task engine is in automatic mode. In read-only or manual mode, no task will be created.
If you are updating an account, check that the account is not set as unmanaged. In that case, no task is created.
Finally, verify the task queue has not held the task up.
Have you checked the engine mode? Look at Main Menu > Administration > Configure Soffid > Integration engine > Smart engine settings
It should be set to automatic.

Autodesk DM API: Is Retry appropriate here?

I've got an application that's been working for a long time.
Recently we created a new app/keys for it, and it's behaving strangely.
(I did figure out that scope requirements had been put in place; I am requesting bucket:create bucket:read data:read data:write.)
When I upload a file to a bucket, I have traditionally made a call to get the object details afterwards, to verify that it was successfully uploaded.
With the new key, I am intermittently getting this error:
GetObjectDetails: InternalServerError {"fault":{"faultstring":"Execution of ServiceCallout servicecallout-auth-acm-request failed. Reason: timeout occurred servicecallout-auth-acm-request","detail":{"errorcode":"steps.servicecallout.ExecutionFailed"}}}
Is this something I should be retrying with a sleep in between? Or is it indicative of something wrong with the upload?
(FYI: putting in a retry seems to have resolved this for me, but I still don't know if that's the right answer, and whether this issue might happen on other calls.)
It could be that the service requires a slight delay between a put object and a get, so I would suggest either using a timer or a retry, as you mentioned. However, a successful response from the upload should be enough to ensure your object has been placed in the bucket, without the need to double-check.
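If you do keep the verification step, a bounded retry with a growing delay is a reasonable pattern. A rough sketch (in ActionScript, like the other snippets on this page; getObjectDetails() is a hypothetical stand-in for your wrapper around the object-details endpoint):
import flash.utils.setTimeout;

var attempts:int = 0;
const MAX_ATTEMPTS:int = 3;
const RETRY_DELAY_MS:int = 2000;

// Hypothetical wrapper around the object-details request; it calls
// onSuccess with the details, or onError if the service returns a 5xx.
function getObjectDetails(onSuccess:Function, onError:Function):void {
    // ... issue the HTTP request here ...
}

function verifyUpload():void {
    attempts++;
    getObjectDetails(onDetails, onDetailsError);
}

function onDetails(details:Object):void {
    trace("upload verified");
}

function onDetailsError(error:Object):void {
    if (attempts < MAX_ATTEMPTS) {
        // Wait a little longer after each transient failure, then retry.
        setTimeout(verifyUpload, RETRY_DELAY_MS * attempts);
    } else {
        trace("giving up after " + attempts + " attempts");
    }
}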

How to control triggering of email for the same error in SSIS using event handlers?

I am using an SSIS event handler to trigger an email whenever an error occurs in the entire package (Package + OnError). Here the number of emails triggered equals the number of errors generated. How can I restrict it to one email, even though the same error occurred 10 times?
Please suggest....
You have a few options. The problem with setting an OnError email at the package level is that it will send an email for each error the package encounters. This gets ugly if you have a deep-level transform fail, which will error repeatedly as it fails back up to the package level.
I suggest that you either:
1) Set up OnError events at the task level and remove the package-level event. Usually this will be good enough. Most tasks will only have one error to report. Be careful with Data Flows; they can act in a similar fashion to the package-level events.
2) Set up some sort of advanced logging. I've seen this done several ways. I've seen people set up Script Tasks to log the errors (at the task level) to a variable and then send a final email containing the variable in the body (at the control flow level). I have also seen people call stored procedures (at the task level and package level) for each error that occurs. The sproc logs the errors to the DB and allows the package to continue on to the next step/container. The logged errors can then be dumped into a CSV and emailed as an attachment.
If you like your current setup, you can try changing the error properties for each container/task. I haven't ever done this, but I do know you can change the way tasks handle errors. I don't like this option because you could possibly miss errors (maybe? I'm partly guessing).
Update: from another solution, if you want to keep your current OnError email and simply prevent certain errors from "bubbling" up and sending emails, you can follow this link to learn how to handle errors gracefully. You could prevent certain tasks' errors from reaching your OnError event at the package level. Good luck.

URLLoader does not respond to .close()

My application requires that I be able to abort/close a URLLoader instance at any point in the post-connect stage; that is, regardless of whether I have connected and the file transfer has already begun, or whether I have connected and the file transfer has yet to commence (the server has not begun sending the file yet).
Here is my code:
var myTextLoader:URLLoader = new URLLoader();
myTextLoader.load(new URLRequest("myText.txt"));
This is what I have noticed:
When I connect to a server and the server starts sending the file immediately, DURING the actual file transfer, if I invoke myTextLoader.close(), it aborts immediately. This is expected. I can monitor this by executing the SWF in Firefox and noticing that when I issue the close(), the network connection goes from Pending to Aborted.
However, if I connect to the server and the file transfer has not actually begun (I have confirmed the connect event has fired and the server has simply not begun sending the file), then myTextLoader.close() has no effect. Only AFTER the first bytes start being transferred from the server does .close() have any effect. I can verify in Firebug that the connection is stuck in Pending; .close() has no effect until the transfer has started.
Any ideas how to work around this issue? I need to be able to invoke .close() and have the connection torn down regardless of the connection stage.
The first thing I would think of is to create a Boolean _aborted that is set to true when your abort() method is invoked.
Like :
var _aborted:Boolean = false;

function abort():void {
    _aborted = true;
    myTextLoader.close();
}
Then check its value in the onProgress handler (or any similar event handler), and actually call the URLLoader's close() method again whenever the value is true:
function onProgress(evt:ProgressEvent):void {
    if (_aborted) {
        myTextLoader.close();
    }
}
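For the check to fire, the listener has to be registered before calling load():
import flash.events.ProgressEvent;
myTextLoader.addEventListener(ProgressEvent.PROGRESS, onProgress);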
It's not pretty and is an ugly workaround, but it could work: once the first bits are actually received, you'll know whether you already wanted to close the connection.
Did you find any bug report on this anywhere? I doubt it could be intended behavior...
The third-party AS3 HTTPClient (https://github.com/gabriel/as3httpclient) appears not to exhibit this issue with close().