I am new to Agile PLM.
I am getting an error in ATOs: "Unable to extract {0}, see the log for details".
Can anyone help me resolve this and find the root cause of the issue?
Log in to the Java Client and try to do a destination reset.
Please check the disk space on your managed servers. This error generally appears when server disk usage exceeds 80%.
A third approach you can try:
Disable all the subscribers.
Test whether the destination works.
A fourth approach:
If you have a clustered environment, check the ACS configuration:
If ACS.Skipserver=true on all the servers, that is not the intended setup; ACS should be enabled on only one of them.
Apart from the above suggestions, if your issue is still not resolved, check whether any exception occurs during the extract. In a clustered environment, ACS should be enabled on only one managed server, so check the STDOUT log of the particular managed server (if using WLS) where ACS is enabled. If the WLS managed servers are installed as Windows services, you need to edit the registry entries for each of them and modify each server's startup parameters so that ACS.Skipserver=true on all but one.
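As a hedged illustration only (assuming ACS.Skipserver is read as a Java system property, which you should verify against your Agile PLM installation guide), the server start arguments of every managed server except the one that should run ACS would gain something like:
-DACS.Skipserver=true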
Now, to capture the logs, log in to the Web Client, go to Tools & Settings -> Administration -> Logging Configuration, and set the following entries to DEBUG:
com.agile.acs.PCExtractTask
com.agile.acs.ScheduleMaster
com.agile.acs.ScheduledEventTask
com.agile.extract.server
com.agile.extract.server.ExtractService
I am trying to connect my Soffid 3 server to our custom web application named Schrift. I am using a JSON REST Web Services connector for this purpose. I added the REST Web Services plugin and then configured an agent of the JSON/XML/SOAP REST web service type.
Loading of objects is working fine. My REST connector connects to the web service successfully and retrieves the account data.
The problem is that when I try to update some data (for example, when I try to lock an account), nothing happens. And unfortunately I don't know what should be happening. When should the REST connector send updated data to the managed system, and in what way? I didn't find any log entries saying that the REST connector was trying to update an object on the managed system. Maybe I did something wrong or missed something.
I would appreciate any help. I can post any configuration or log details if you need them.
Update#1
(I did some investigation after the first answer)
I checked the agent settings: Read only and Manual account creation are set to no.
The account was set to the unmanaged type, but I succeeded in changing its type to shared and then to single without getting an error. Now it is set to single.
The task queue is empty.
Also, I've checked that the update method is present and the update properties are set correctly. updateParams is not set (meaning all attributes should be sent to the managed system).
But when I change the status of the account (from Enable to Disable), nothing happens.
In the console log I can see only these lines:
14-Sep-2021 13:26:29.708 INFO [BPM-Scheduler:192.168.7.121:1] com.soffid.iam.bpm.job.JobExecutorThread.run No job to execute
When I manually run the task "Analize impact for changes on Schrift", the Execution log shows:
Changes detected for accounts
=============================
NO CHANGE DETECTED
Changes detected for roles
=============================
NO CHANGE DETECTED
Update#2
After many attempts I made some progress. Now when I make some changes in the account, the task named UpdateAccount baklykov#irf.com.ua#Schrift appears, but runs with an error.
At first it was a 415 Unsupported Media Type error, as I wrote in the comments, but now it looks a little different:
Throws exception updating object : Extensible object [type = account]
EmployeeEmail: baklykov#irf.com.ua
IsLockedOut: true (log truncated) ...
caused by Unexpected response, Content-Type: null
Update#3
I found out that Soffid's request for updating the object was in an improper format (all the parameters were passed as HTTP request parameters instead of being put in the JSON body).
After some research I found a method property called Encoding and set it to the value application/json.
Now the parameters are passed in the JSON body (which is what I need), but the problem is that Soffid puts all the parameters in the JSON body, including the key parameter by which the object to update should be identified. My guess is that this is why the object in the target system is still not updated.
In other words, my application expects a request like this:
https://myapp.mysite.com/api/v1/Soffid/Employees?EmployeeEmail=baklykov%40irf.com.ua :
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan"}
but Soffid sends this:
https://myapp.mysite.com/api/v1/Soffid/Employees:
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan","EmployeeEmail":"baklykov#irf.com.ua"}
The system should have created an UpdateAccount task in the task queue. Please verify that:
The task engine is in automatic mode; in read-only or manual mode, no task will be created.
If you are updating an account, the account is not set as unmanaged; in that case, no task is created.
Finally, verify the task queue has not held the task up.
Have you checked the engine mode? Look at Main Menu > Administration > Configure Soffid > Integration engine > Smart engine settings
It should be set to automatic.
When I try to create a MySQL database on Microsoft Azure using a plain REST request (PUT) to:
https://management.azure.com/subscriptions/<subscriptionid>/resourceGroups/resource-<id>/providers/successbricks.cleardb/databases/<my-database>?api-version=2014-04-01
I am getting this error:
HTTP STATUS CODE 400 Bad Request
Error message: 'Legal terms have not been accepted for this item on
this subscription. To accept legal terms, please go to the Azure
portal (http://go.microsoft.com/fwlink/?LinkId=534873) and configure
programmatic deployment for the Marketplace item or create it there
for the first time'
So I went to the Microsoft Azure Portal and accepted the legal terms. I tried again: same error. I searched almost the entire Azure Portal for some configuration related to this and found nothing.
Does anyone have the same problem?
Thanks.
You should not only accept the terms but also follow the procedure for enabling programmatic access. It should be on the license page.
Programmatic deployment can only be found under Virtual Machines MySQL, not under Data Storage MySQL Database. Try the REST request after you have enabled programmatic deployment.
In addition, I successfully created a MySQL database using the REST API without reproducing your issue, but note that the request body needs to be sent as well when using a PUT request.
OK guys, found the solution. I don't know why, but if we change the JSON attribute { "plan.name": "Pay-As-You-Go" } to { "plan.name": "Free" } the database is created successfully.
I opened a support ticket to find out which MySQL plans are available. I will update the answer as soon as possible.
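For anyone scripting this call, a minimal sketch (Python, requests) of the request shape; the placeholders mirror the URL from the question, the token is hypothetical, and the body shows only the plan attribute discussed above, not the full successbricks.cleardb schema:
import requests

# Hypothetical sketch: fill in real subscription/resource names and a valid token.
url = ("https://management.azure.com/subscriptions/<subscriptionid>/resourceGroups/"
       "resource-<id>/providers/successbricks.cleardb/databases/<my-database>")
headers = {"Authorization": "Bearer <access-token>"}
body = {"plan.name": "Free"}  # "Pay-As-You-Go" produced the legal-terms error here
resp = requests.put(url, params={"api-version": "2014-04-01"}, json=body, headers=headers)
print(resp.status_code, resp.text)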
While developing in Fi-Cloud's CEP I've been having an issue that keeps happening. As I try to develop a definition to perform a task, the CEP server and Authoring Tool stop responding, although SSH is still responsive.
This issue happens as I develop. I'm using the Authoring Tool to alter the definition bit by bit and then re-upload it to the server through the Authoring Tool's export feature.
To reinitialize Proton with the new definition each time I alter it, I use Postman with this single operation:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
header: 'Content-Type': 'application/json'
body: {"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
At the same time, I'm logged in with three SSH instances: one to monitor the files being created in /opt/tomcat10/sample/ and other things, and the other two to 'tail -f' the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I'm iterating through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By tailing Tomcat's log file (tail -f /opt/tomcat10/logs/catalina.out) I can see that, under these circumstances, if I attempt a:
GET http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
I get no response back and Tomcat logs the following:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
SSH is still responsive and I can look at Tomcat's log this way.
To get over this and continue, I exit the SSH connections and restart the CEP instance in Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to development?
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
Export the definition using the authoring tool export (as you did)
Stop the engine using a REST PUT:
PUT http://host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"stop"}
Start the engine using a REST PUT:
PUT http://host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name that the engine is already working with.
Activating "ChangeDefinitions" action, only influences the next run of the CEP, and has no influence on the current run.
This answer your question about how you should update a CEP definition.
Hope it will solve your issue.
I have configured two AppFabric instances and am trying to connect to the cache from a test client.
At first I had trouble getting the cache using the DataCacheFactory, but after opening ports 22233-22235 in the firewall I managed to get the cache using the DataCacheFactory.
As soon as I try to use the cache for a very small object (using a simple get), I get the following error with a null InnerException:
ErrorCode:SubStatus:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown.
I don't believe it's a MaxBufferSize issue (I also modified the transportProperties in the config just to make sure), but on the other hand I'm able to get the cache, which I believe indicates that the client can communicate with the server. So what is it? How can I get more details on this issue?
Thanks in advance,
Nir.
Got this to work!
All I needed to do was add the host names, as they appear in the ClusterConfig file, to the client's hosts file, and that's it!
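For example (hypothetical names and addresses): if ClusterConfig.xml lists hosts named CACHEHOST01 and CACHEHOST02, the client's hosts file (C:\Windows\System32\drivers\etc\hosts) gets entries like:
10.0.0.11    CACHEHOST01
10.0.0.12    CACHEHOST02
using the cache servers' real IP addresses.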
Hope that helps anyone,
Nir.
I am trying to follow the instructions for the Windows Azure MySQL PHP Solution Accelerator (http://code.msdn.microsoft.com/winazuremysqlphp), and I get the following error in the Fabric UI and MySQL doesn't start: http://www.pastie.org/1297146
Looks like your diagnostics connection string is bogus; check that it points to a valid account and that the key is correct. Or just disable diagnostics by commenting out DiagnosticMonitor.Start(...) (it should be in WorkerRole.cs).