Why does my Soffid JSON REST Web Services Connector not update an object in the target system?

I am trying to connect my Soffid 3 server with our custom web application named Schrift. I am using a JSON REST Web Services Connector for this purpose. I added the REST Web Services plugin and then configured an agent of the JSON/XML/SOAP REST web service type.
Loading of objects works fine: my REST connector connects to the web service successfully and gets the account data.
The problem is that when I try to update some data (for example, when I try to lock an account), nothing happens, and unfortunately I don't know what should be happening. When should the REST connector send updated data to the managed system, and in what way? I didn't find any log entries saying that the REST connector was trying to update an object on the managed system. Maybe I did something wrong or missed something.
I would appreciate any help. I can post any configuration or log details if you need them.
Update #1
(I did some investigation after the first answer)
I checked the agent settings: Read only and Manual account creation are both set to no.
The account was set to the unmanaged type, but I succeeded in changing its type to shared and then to single without getting an error. Now it is set to single.
The task queue is empty.
I've also checked that the update method is present and the update properties are set correctly. updateParams is not set (which means that all attributes should be sent to the managed system).
But when I change the status of the account (from Enable to Disable), nothing happens.
In the console log I can see only lines like this:
14-Sep-2021 13:26:29.708 INFO [BPM-Scheduler:192.168.7.121:1] com.soffid.iam.bpm.job.JobExecutorThread.run No job to execute
When I manually run the task "Analize impact for changes on Schrift", the execution log shows:
Changes detected for accounts
=============================
NO CHANGE DETECTED
Changes detected for roles
=============================
NO CHANGE DETECTED
Update #2
After many attempts I made some progress. Now when I make some changes to the account, a task named UpdateAccount baklykov@irf.com.ua@Schrift appears, but it runs with an error.
At first it was a 415 Unsupported Media Type error, as I wrote in the comments, but now it looks a little different:
Throws exception updating object : Extensible object [type = account]
EmployeeEmail: baklykov@irf.com.ua
IsLockedOut: true (log truncated) ...
caused by Unexpected response, Content-Type: null
Update #3
I found out that Soffid's request for updating the object was in an improper format (all the parameters were passed as HTTP request parameters instead of being put in the JSON body).
After some research I found a method property called Encoding and set it to the application/json value.
Now the parameters are passed in the JSON body (which is what I need), but the problem is that Soffid puts all the parameters into the JSON body, including the key parameter by which the object to update should be identified. My guess is that this is the reason why the object in the target system is still not updated.
In other words, my application expects a request like this:
https://myapp.mysite.com/api/v1/Soffid/Employees?EmployeeEmail=baklykov%40irf.com.ua
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan"}
but Soffid sends this:
https://myapp.mysite.com/api/v1/Soffid/Employees
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan","EmployeeEmail":"baklykov@irf.com.ua"}

The system should have created an UpdateAccount task in the task queue. Please verify:
The task engine is in automatic mode. In read-only or manual mode, no task will be created.
If you are updating an account, check that the account is not set as unmanaged. In that case, no task is created.
Finally, verify that the task queue has not held the task up.

Have you checked the engine mode? Look at Main Menu > Administration > Configure Soffid > Integration engine > Smart engine settings
It should be set to automatic.

Related

How to add a new app setting to Azure Web App using pulumi without removing the existing settings?

I'm using pulumi-azure-native for infrastructure as code. I need to create an Azure Web App (based on an App Service Plan) and add some app settings (and connection strings) throughout the code, e.g., an Application Insights instrumentation key, a Blob Storage account name, etc.
I figured out that there is a resource, WebAppApplicationSettings, that can update web app settings:
from pulumi import ResourceOptions
from pulumi_azure_native import web

web_app = web.WebApp(
    'my-web-app-test123',
    ...
)

web.WebAppApplicationSettings(
    'myappsetting',
    name=web_app.name,
    resource_group_name='my-resource-group',
    properties={'mySetting': 123456},
    opts=ResourceOptions(depends_on=[web_app])
)
It turns out that WebAppApplicationSettings replaces the entire collection of app settings with the values given in the properties parameter, which is not what I need. I need to append a new setting to the existing settings.
So, I tried this:
Fetch the existing settings from web app using list_web_app_application_settings_output
Add the new settings to the existing settings
Update the app settings using WebAppApplicationSettings
from pulumi import Output, ResourceOptions
from pulumi_azure_native import web

web_app = web.WebApp(
    'my-web-app-test123',
    ...
)

current_apps_settings = web.list_web_app_application_settings_output(
    name=web_app.name,
    resource_group_name='my-resource-group',
    opts=ResourceOptions(depends_on=[web_app])
).properties

my_new_setting = {'mySetting': 123456}

new_app_settings = Output.all(current=current_apps_settings).apply(
    lambda args: my_new_setting.update(args['current'])
)

web.WebAppApplicationSettings(
    'myappsetting',
    name=web_app.name,
    resource_group_name='my-resource-group',
    properties=new_app_settings,
    opts=ResourceOptions(depends_on=[web_app])
)
However, this doesn't work either and throws the following error during pulumi up:
Exception: invoke of azure-native:web:listWebAppApplicationSettings failed: invocation of azure-native:web:listWebAppApplicationSettings returned an error: request failed /subscriptions/--------------/resourceGroups/pulumi-temp2/providers/Microsoft.Web/sites/my-web-app-test123/config/appsettings/list: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Web/sites/my-web-app-test123' under resource group 'pulumi-temp2' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"
error: an unhandled error occurred: Program exited with non-zero exit code: 1
Is there a way that I can add a new app setting to an Azure Web App using pulumi without changing/removing the existing settings?
Here's a suboptimal workaround: App Configuration and Enable Azure Function Dynamic Configuration.
And as far as I can tell it comes with some drawbacks:
cold start time may increase
additional costs
care must be taken to avoid redundant calls (costly)
additional boilerplate code needed for every function app
Maybe there's a better way (I hope there is); I just haven't found it yet either.
After some searching and reaching out to pulumi-azure-native people, I found an answer:
The Azure REST API doesn't currently support this feature, i.e., updating a single Web App setting apart from the others, so there isn't such a feature in pulumi-azure-native either.
As a workaround, I kept all the app settings I needed to add, update, or remove in a dictionary throughout my Python script, and then passed them to the web.WebAppApplicationSettings resource at the end of the script so that they would be applied all at once to the Web App resource. This is how I solved my problem.
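A minimal sketch of that pattern, with resource names and setting values that are purely illustrative rather than taken from the original code:

from pulumi import ResourceOptions
from pulumi_azure_native import web

# Single source of truth: accumulate every app setting here as the
# script runs, instead of applying them piecemeal.
app_settings = {}

web_app = web.WebApp(
    'my-web-app-test123',
    resource_group_name='my-resource-group',
)

# Anywhere in the script, just add entries to the dictionary:
app_settings['mySetting'] = '123456'
app_settings['anotherSetting'] = 'some-value'  # illustrative

# At the very end, apply the whole dictionary in one shot, so earlier
# entries are never wiped out by a later replacement.
web.WebAppApplicationSettings(
    'myappsetting',
    name=web_app.name,
    resource_group_name='my-resource-group',
    properties=app_settings,
    opts=ResourceOptions(depends_on=[web_app]),
)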

Couchbase Sync-Gateway Multiple Clients

I'm currently playing around with the Couchbase Sync Gateway and have built a demo app.
What is the intended behavior if a user logs in with the same username on a different device (which has an empty database) or if he deleted the local database?
I'm expecting that all the data from the server should get synced back to the clients.
Is this correct?
My problem is that if I delete the database or log in from a different device, nothing gets synced.
OK, I figured it out, and it's exactly how I thought it would be.
If I log in from a different device, I get all the data synced automatically.
My problem was the missing sync function. I thought it would use a default and route all documents to the public channel automatically.
I'm now using the following simple sync function:
"sync": `function (doc, oldDoc) {
channel('!');
access('demo#example.com', '*');
}`
This will simply route all documents to the public channel and grant my demo-user access to it.
I think this shouldn't be used in production, but it's a good starting point for playing around.
Now everything is working fine.
Edit: I've now found the missing info:
https://docs.couchbase.com/sync-gateway/current/configuration-properties.html#databases-this_db-sync
If you don't supply a sync function, Sync Gateway uses the following default sync function:
...
The channels property is an array of strings that contains the names of the channels to which the document belongs. If you do not include a channels property in a document, the document does not appear in any channels.

Data Studio Connector can't get access token for BigQuery Service Account: Access not granted or expired

I'm trying to make a community connector to connect my database in BigQuery to Data Studio with the service account that I hooked up as the Owner/DataViewer/JobUser of the BigQuery project. I know that the service account works when connecting to BigQuery because I've tested it elsewhere. I copied the connector code from this tutorial (https://developers.google.com/datastudio/solution/blocks/using-service-accounts) almost exactly, replacing the SQL string with my query and adding some different query parameters. I also stored the service account's credentials in my script properties by pasting the JSON object and storing it like:
var service_account_creds_obj = {
  "type": "service_account",
  "project_id": ...
  ...
};
scriptProperties.setProperty('SERVICE_ACCOUNT_CREDS', JSON.stringify(service_account_creds_obj));
However, I always get stuck in the flow when my getData function calls getOauthService().getAccessToken(), which never successfully returns. When I create a report using the connector, I get this error: "Access not granted or expired." I can't find the documentation for getAccessToken, and I'm having trouble understanding why it won't terminate. I can see that it doesn't return because a console.log immediately before that line displays, but it never gets to the log on the next line. Then my try-catch block catches the error that I'm seeing. Note that my getOauthService function is exactly the same as the one from the documentation/tutorial example, except that I've played around with the input text in the call to createService. That input text shouldn't really matter though, right?
Please, I've been trying to debug this for hours, but the documentation on this is pretty horrible, and it's really hard to debug since the flow of the code is handled in the background and Stackdriver logging is really buggy.
I figured out my problem. The documentation posted above says to set the OAuth2 scope to https://www.googleapis.com/auth/bigquery.readonly. However, I naively included
"oauthScopes": ["https://www.googleapis.com/auth/bigquery.readonly"]
in my manifest file. Meanwhile, the code I copied over from the documentation already included this line:
.setScope(['https://www.googleapis.com/auth/bigquery.readonly']);
So I'm not sure exactly why this caused a problem, but it must have prevented the service created by OAuth2.createService from being set up properly.

SharedObject: can receive event from other clients but never fires event after saving data

I'm using a SharedObject to create a simple chat app. The SharedObject was created fine, and my app could receive the sync event when other clients update the data on the SO. However, the problem comes when my app tries to save data on the SO to signal other clients. I've verified that the data was changed using the following code:
trace("before:"+so.data.chatMessage);
so.data.chatMessage = msg.text;
trace("after:"+so.data.chatMessage);
It said "before:abc" and "after:def". Unfortunately no clients received the sync event after the data on the SO changed including the client that made the data change itself. So this means this client can receive other client's message but itself message never gets out.
Has anybody seen such an issue before? Thanks,
Jack
You have to call flush():
If you don't use this method, Flash Player writes the shared object to a file when the shared object session ends — that is, when the SWF file is closed, when the shared object is garbage-collected because it no longer has any references to it, or when you call SharedObject.clear() or SharedObject.close().
or
use setProperty() to change the property:
Updates the value of a property in a shared object and indicates to the server that the value of the property has changed.
As you only change a property of the data object, there's no notification going on that this value has changed.
Calling so.flush() resulted in "Error: Error #2130: Unable to flush SharedObject." It did not print an internal error, though. So it seems the flush couldn't succeed... Any idea how this could happen?
Take a look at this other question:
Error #2130 Unable to flush sharedObject

Fiware CEP server stops responding

While developing in Fi-Cloud's CEP I've been having an issue that happens repeatedly. As I try to develop a definition to perform a task, the CEP server and the Authoring Tool stop responding, although SSH is still responsive.
This issue happens as I develop. I'm using the Authoring Tool to alter the definition bit by bit, and then I re-upload it to the server through the Authoring Tool's export feature.
To restart the Proton with the new definition each time I alter it, I use Google's Postman with this single operation:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
header: 'Content-Type': 'application/json'
body: {"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
At the same time, I'm logged in with three SSH instances: one to monitor the files being created in /opt/tomcat10/sample/ and other things, and the other two to tail -f the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I iterate through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By "tailing" tomcat's log file (# tail -f /opt/tomcat10/logs/catalina.out) I can see that, when under these circumstances, if I attemp a:
-GET (url: http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer)
I get no response back and tomcat logs the following response:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
SSH is still responsive and I can look at Tomcat's log this way.
To get over this and continue, I exit ssh connections and restart CEP's instance in the Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to development?
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
Export the definition using the authoring tool export (as you did)
Stop the engine run, using REST PUT
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"stop"}
Start the engine, using REST PUT
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name that the engine is already working with.
Activating "ChangeDefinitions" action, only influences the next run of the CEP, and has no influence on the current run.
This answers your question about how to update a CEP definition.
Hope it solves your issue.