MySQL timeout with CakePHP ACL create utility

The app I'm working with is well established and uses a pretty standard CakePHP ACL setup with the usual ACO/ARO tables. I usually use the CakePHP console (./cake acl ...) to add ACOs when I create a new action in a controller, and as recently as last week I was doing this just fine for a new controller I'd created.
This morning, however, running the console utility gives a very lengthy delay and eventually this output (my company DB URL removed):
./cake acl create aco Statistics data
Warning: mysql_connect(): [2002] Operation timed out (trying to connect via tcp://[db url]:3306) in /dev/docs/cake/libs/model/datasources/dbo/dbo_mysql.php on line 552
Warning: mysql_connect(): Operation timed out in /dev/docs/cake/libs/model/datasources/dbo/dbo_mysql.php on line 552
Warning: mysql_select_db() expects parameter 2 to be resource, boolean given in /dev/docs/cake/libs/model/datasources/dbo/dbo_mysql.php on line 558
Warning: mysql_get_server_info() expects parameter 1 to be resource, boolean given in /dev/docs/cake/libs/model/datasources/dbo/dbo_mysql.php on line 566
Warning: mysql_query() expects parameter 2 to be resource, boolean given in /dev/docs/cake/libs/model/datasources/dbo/dbo_mysql.php on line 600
Error: Missing database table 'aros' for model 'Aro'
There is no issue with load time when using the app from the browser, nor with any of the other DB-intensive actions I created last week. This dev copy of the app does not use Cake's caching, so that's not the problem. It really just seems to be choking somewhere specifically in adding the new ACO.
I'd think it was an issue with the DB server if not for the fact that it is otherwise responding fine in the app. Any ideas where in Cake's guts this sort of misstep could occur?

For the one in a million chance that someone is in my position: remember that when working in a large institution there are these little things called firewalls, and if you're going to run the Cake console, you'd better be SSH'd onto the webserver to run your ACO commands or anything else involving a connection to your database server, instead of working from a mapped shared drive as I decided to do this week.
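In practice that just means running the console on the web host itself; a minimal sketch (the hostname and app path are placeholders, not from the original post):
ssh you@webserver.example.com
cd /var/www/yourapp
./cake acl create aco Statistics data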

Related

Why does my Soffid JSON REST Web Services Connector not update an object in the target system?

I am trying to connect my Soffid 3 server with our custom web application named Schrift. I am using a JSON REST Web Services Connector for this purpose. I added the REST Web service plugin and then configured an agent with the JSON/XML/SOAP REST web service type.
Loading of objects is working fine. My REST connector connects to the web service successfully and gets data of the accounts.
The problem is that when I try to update some data (for example, lock an account), nothing happens. Unfortunately, I don't know what should be happening. When should the REST connector send updated data to the managed system, and in which way? I didn't find any log entries saying that the REST connector was trying to update an object on the managed system. Maybe I did something wrong or missed something.
I would appreciate any help. I can post any config or log details if you need them.
Update #1
(I did some investigation after the first answer.)
I checked the agent settings: Read only and Manual account creation are set to no.
The account was set to the unmanaged type, but I succeeded in changing its type to shared and then to single without getting an error. Now it is set to single.
The task queue is empty.
I've also checked that the update method is present and the update properties are set correctly. updateParams is not set (which means that all attributes should be sent to the managed system).
But when I change the status of the account (from Enable to Disable), nothing happens.
In the console log I can see only these lines
14-Sep-2021 13:26:29.708 INFO [BPM-Scheduler:192.168.7.121:1] com.soffid.iam.bpm.job.JobExecutorThread.run No job to execute
When I manually run the task "Analize impact for changes on Schrift", the Execution log shows:
Changes detected for accounts
=============================
NO CHANGE DETECTED
Changes detected for roles
=============================
NO CHANGE DETECTED
Update #2
After many attempts I made some progress. Now when I make some changes to the account, a task named UpdateAccount baklykov@irf.com.ua@Schrift appears, but it runs with an error.
At first it was a 415 Unsupported Media Type error, as I wrote in the comments, but now it looks a little different:
Throws exception updating object : Extensible object [type = account]
EmployeeEmail: baklykov@irf.com.ua
IsLockedOut: true (log truncated) ...
caused by Unexpected response, Content-Type: null
Update #3
I found out that Soffid's request for updating the object was in an improper format (all the parameters were passed in the HTTP request itself instead of being put in the JSON body).
After researching I found a method property called Encoding and set it to the application/json value.
Now the parameters are passed in the JSON body (which is what I need), but the problem is that Soffid puts all the parameters in the JSON body, including the key parameter by which the object to be updated should be identified. My guess is that this is the reason why the object in the target system is still not updated.
In other words my application expects a request like this:
https://myapp.mysite.com/api/v1/Soffid/Employees?EmployeeEmail=baklykov%40irf.com.ua :
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan"}
but Soffid sends this:
https://myapp.mysite.com/api/v1/Soffid/Employees:
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan","EmployeeEmail":"baklykov#irf.com.ua"}
The system should have created an UpdateAccount task in the task queue. Please verify:
The task engine is in automatic mode. In read-only or manual mode, no task will be created.
If you are updating an account, check that the account is not set as unmanaged. In that case, no task is created.
Finally, verify the task queue is not holding the task up.
Have you checked the engine mode? Look at Main Menu > Administration > Configure Soffid > Integration engine > Smart engine settings
It should be set to automatic.

com.hierynomus.mssmb2.SMBApiException: STATUS_NETWORK_NAME_DELETED exception

I am getting the following error stack trace when trying to connect to an SMB share that I connect to with this library most of the time, so yes, the code mostly works, but sometimes it does not. I have not been able to try anything yet because I have no idea what could be wrong: the share config, the network, or the code.
com.hierynomus.mssmb2.SMBApiException: STATUS_NETWORK_NAME_DELETED (0xc00000c9): Authentication failed for 'your-user' using com.hierynomus.smbj.auth.NtlmAuthenticator@565d98da
com.hierynomus.smbj.connection.Connection.authenticate(Connection.java:182)
Here is my SmbConfig; I am using version 0.9.1 of smbj.
import com.hierynomus.smbj.SmbConfig;

SmbConfig config = SmbConfig.builder()
        .withMultiProtocolNegotiate(true) // negotiate the dialect via a multi-protocol negotiate
        .withSigningRequired(true)        // require message signing
        .withDfsEnabled(true)             // enable DFS path resolution
        .build();
This Microsoft page https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-smb/6ab6ca20-b404-41fd-b91a-2ed39e3762ea gives more information about 0xC00000C9 (STATUS_NETWORK_NAME_DELETED) and says:
The network name specified by the client has been deleted on the server. This error is returned if the client specifies an incorrect TID or the share on the server represented by the TID was deleted.
Should I assume something happened to the share on the Windows server during the execution of the code?

Google Cloud SQL No Response

We are running a Sails.js API on Google Container Engine with a Cloud SQL database, and recently we've been finding that some of our endpoints stall and never send a response.
I had a health check monitoring /v1/status and it registered 100% uptime when I had the following simple response:
status: function( req, res ){
  res.ok('Welcome to the API');
}
As soon as we added a database query, the endpoint started timing out. It doesn't happen all the time, but at seemingly random intervals, sometimes for hours on end. This is what we have changed the query to:
status: function( req, res ){
  Email.findOne({ value: "someone@example.com" }).then(function( email ){
    res.ok('Welcome to the API');
  }).fail(function(err){
    res.serverError(err);
  });
}
Rather suspiciously, this all works fine in our staging and development environments; it's only when the code is deployed to production that the timeout occurs, and even then only some of the time. The only things that change between staging and production are the database we are connecting to and the load on the server.
As I mentioned earlier, we are using Google Cloud SQL and the Sails-MySQL adapter. We have the following error stacks from the production server:
AdapterError: Invalid connection name specified
at getConnectionObject (/app/node_modules/sails-mysql/lib/adapter.js:1182:35)
at spawnConnection (/app/node_modules/sails-mysql/lib/adapter.js:1097:7)
at Object.module.exports.adapter.find (/app/node_modules/sails-mysql/lib/adapter.js:801:16)
at module.exports.find (/app/node_modules/sails/node_modules/waterline/lib/waterline/adapter/dql.js:120:13)
at module.exports.findOne (/app/node_modules/sails/node_modules/waterline/lib/waterline/adapter/dql.js:163:10)
at _runOperation (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/operations.js:408:29)
at run (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/operations.js:69:8)
at bound.module.exports.findOne (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/basic.js:78:16)
at bound [as findOne] (/app/node_modules/sails/node_modules/lodash/dist/lodash.js:729:21)
at Deferred.exec (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:501:16)
at tryCatcher (/app/node_modules/sails/node_modules/waterline/node_modules/bluebird/js/main/util.js:26:23)
at ret (eval at <anonymous> (/app/node_modules/sails/node_modules/waterline/node_modules/bluebird/js/main/promisify.js:163:12), <anonymous>:13:39)
at Deferred.toPromise (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:510:61)
at Deferred.then (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:521:15)
at Strategy._verify (/app/api/services/passport.js:31:7)
at Strategy.authenticate (/app/node_modules/passport-local/lib/strategy.js:90:12)
at attempt (/app/node_modules/passport/lib/middleware/authenticate.js:341:16)
at authenticate (/app/node_modules/passport/lib/middleware/authenticate.js:342:7)
at Object.AuthController.login (/app/api/controllers/AuthController.js:119:5)
at bound (/app/node_modules/sails/node_modules/lodash/dist/lodash.js:729:21)
at routeTargetFnWrapper (/app/node_modules/sails/lib/router/bind.js:179:5)
at callbacks (/app/node_modules/sails/node_modules/express/lib/router/index.js:164:37)
Error (E_UNKNOWN) :: Encountered an unexpected error :
Could not connect to MySQL: Error: Pool is closed.
at afterwards (/app/node_modules/sails-mysql/lib/connections/spawn.js:72:13)
at /app/node_modules/sails-mysql/lib/connections/spawn.js:40:7
at process._tickDomainCallback (node.js:381:11)
Looking at the errors alone, I'd be tempted to say that we have something misconfigured. But the fact that it works some of the time (and has previously been working fine!) leads me to believe there's some other black magic at work here. Our Cloud SQL instance is D0 (though we've tried upping the size to D4) and our activation policy is "Always On".
EDIT: I had seen others complain about Google Cloud SQL (e.g. this SO post) and I was suspicious, but we have since moved our database to Amazon RDS and we are still seeing the same issues, so it must be a problem with Sails and the MySQL adapter.
This issue is leading to hours of downtime a day; we need it resolved, and any help is much appreciated!
This appears to be a Sails issue, and not necessarily related to Cloud SQL.
Is there any way the QPS limit for Google Cloud SQL is being reached? See here: https://cloud.google.com/sql/faq#sizeqps
Why is my database instance sometimes slow to respond?
In order to minimize the amount you are charged for instances on per use billing plans, by default your instance becomes passive if it is not accessed for 15 minutes. The next time it is accessed there will be a short delay while it is activated. You can change this behavior by configuring the activation policy of the instance. For an example, see Editing an Instance Using the Cloud SDK.
It might be related to your activation policy setting. If it is set to ON_DEMAND, the instance sleeps to save your budget, so the first query after it wakes is slow. This might cause the timeout.
https://cloud.google.com/sql/faq?hl=en
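If the activation policy does turn out to be the cause, it can be switched to always-on with the Cloud SDK; a minimal sketch (the instance name is a placeholder):
gcloud sql instances patch my-instance --activation-policy=ALWAYS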

Fiware CEP server stops responding

While developing in Fi-Cloud's CEP I've been having an issue that keeps happening: as I'm trying to develop a definition to perform a task, the CEP server and Authoring Tool stop responding, although ssh is still responsive.
This issue happens as I develop. I'm using the Authoring Tool to alter the definition bit by bit, and then I re-upload it to the server through the Authoring Tool's export feature.
To reinitialize the Proton with the new definition each time I alter it, I use Google's Postman with this single operation:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
header: 'Content-Type' : 'application/json'
body: {"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
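For reference, the same operation expressed as a curl command would look roughly like this (same URL, header and body as the Postman operation above; Definition_Name is a placeholder):
curl -X PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer \
  -H 'Content-Type: application/json' \
  -d '{"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}'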
At the same time, I'm logged in with three ssh instances: one to monitor the files being created in /opt/tomcat10/sample/ and other things, and the other two to 'tail -f' the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I'm iterating through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By "tailing" Tomcat's log file (# tail -f /opt/tomcat10/logs/catalina.out) I can see that, under these circumstances, if I attempt a:
GET http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
I get no response back and Tomcat logs the following:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
Ssh is still responsive and I can look at Tomcat's log this way.
To get over this and continue, I exit the ssh connections and restart the CEP instance in the Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to development?
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
Export the definition using the authoring tool export (as you did)
Stop the engine run, using REST PUT:
PUT http://{host}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"stop"}
Start the engine, using REST PUT:
PUT http://{host}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name that the engine is already working with.
Activating "ChangeDefinitions" action, only influences the next run of the CEP, and has no influence on the current run.
This answers your question about how you should update a CEP definition.
Hope it solves your issue.

call to undefined function ejabberd_logger:info_msg

I was trying to log chat messages to MySQL in Ejabberd 14.07 using mod_log_chat_mysql5 on Ubuntu.
Ejabberd is already configured to store basic details in MySQL. That feature is working, and I am able to see newly registered users, offline messages, etc. in the MySQL DB.
When mod_log_chat_mysql5 is enabled, Ejabberd starts, but the following error message is logged and the chat tables are not populated. Please help...
[error] <0.433.0> CRASH REPORT Process <0.433.0> with 0 neighbours exited with reason: call to undefined function ejabberd_logger:info_msg(mod_log_chat_mysql5, 62, "Starting ~p", [mod_log_chat_mysql5]) in gen_server:init_it/6 line 328
The error message says that you call ejabberd_logger:info_msg/4, which is not defined.
Have a look at your versions: it seems that in the past the include file ejabberd.hrl defined the macro:
-define(INFO_MSG(Format, Args),
ejabberd_logger:info_msg(?MODULE,?LINE,Format, Args)).
This is no longer the case in 14.07. The macro is now defined in logger.hrl and expands to
lager:info(Format, Args))
or
p1_logger:info_msg(?MODULE, ?LINE, Format, Args))
depending on the LAGER flag.