I am trying to write tests for an Express API that uses Axios and is connected to a MySQL database. I am getting the following error when I run my tests in Jest:
A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks. Active timers can also cause this, ensure that .unref() was called on them.
After running --detectOpenHandles, I get the following:
Jest has detected the following 3 open handles potentially keeping Jest from exiting:
● TCPWRAP
15 |
16 |
> 17 | const connection = mysql.createConnection(process.env.MYSQL_CONNECTION)
| ^
18 |
19 | /**
20 | * #swagger
at new Connection (node_modules/mysql2/lib/connection.js:45:27)
at Object.createConnection (node_modules/mysql2/index.js:10:10)
at Object.<anonymous> (src/users/router.js:17:26)
at Object.<anonymous> (src/index.js:9:25)
at Object.<anonymous> (src/server.js:1:13)
at Object.<anonymous> (__tests__/app.test.js:3:13)
● TCPSERVERWRAP
3 |
4 |
> 5 | app.listen(serverPort, () =>
| ^
6 | console.log(`API Server listening on port ${serverPort}`)
7 | );
8 |
at Function.listen (node_modules/express/lib/application.js:618:24)
at Object.<anonymous> (src/server.js:5:5)
at Object.<anonymous> (__tests__/app.test.js:3:13)
● TLSWRAP
16 | const getMgmtApiJwt = async () => {
17 | try {
> 18 | const resp = await axios(newRequest);
| ^
19 | return resp.data
20 | } catch (e) {
21 | console.log("did not work");
at RedirectableRequest.Object.<anonymous>.RedirectableRequest._performRequest (node_modules/follow-redirects/index.js:279:24)
at new RedirectableRequest (node_modules/follow-redirects/index.js:61:8)
at Object.request (node_modules/follow-redirects/index.js:487:14)
at dispatchHttpRequest (node_modules/axios/lib/adapters/http.js:202:25)
at httpAdapter (node_modules/axios/lib/adapters/http.js:46:10)
at dispatchRequest (node_modules/axios/lib/core/dispatchRequest.js:53:10)
at Axios.request (node_modules/axios/lib/core/Axios.js:108:15)
at axios (node_modules/axios/lib/helpers/bind.js:9:15)
at getMgmtApiJwt (src/users/controller.js:18:24)
at Object.<anonymous> (__tests__/app.test.js:182:24)
What can I try next?
Implement a global teardown setup. That should fix this issue.
Take a look at Fiehra's answer here: jest and mongoose - jest has detected opened handles
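For what it's worth, here is a minimal sketch of one way to close the handles in your trace. It assumes you change src/server.js to export the handle returned by app.listen() and src/users/router.js to export its connection — neither export is in your question, so adjust the names to your project:
// src/server.js — capture and export the handle that app.listen() returns
const app = require('./index');
const serverPort = process.env.PORT || 3000; // or however your server.js already defines it
const server = app.listen(serverPort, () =>
  console.log(`API Server listening on port ${serverPort}`)
);
module.exports = server;

// __tests__/app.test.js — close the open handles once the suite finishes
const server = require('../src/server');
const { connection } = require('../src/users/router'); // assumes router.js exports its connection

afterAll((done) => {
  connection.end();   // releases the mysql2 TCPWRAP handle
  server.close(done); // releases the express TCPSERVERWRAP handle
});
The TLSWRAP handle is different: it comes from an axios request still in flight when the run ends, so make sure the test awaits the promise returned by getMgmtApiJwt (or mock axios in the test) so the request settles before teardown.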
My MariaDB server is timing out my C++ client (using libmariadb) after 600 seconds (10 minutes) of inactivity, and I'm not sure why, because I can't find any configured timeouts that specify that number.
Here's my code, where I execute a simple SELECT query, wait 11 minutes, then run that same query again and get a "server gone" error:
#include <iostream>
#include <unistd.h>
#include <errmsg.h>
#include <mysql.h>

int main(int, char**)
{
    // connect to the database
    MYSQL* connection = mysql_init(NULL);
    my_bool reconnect = 0;
    mysql_options(connection, MYSQL_OPT_RECONNECT, &reconnect); // don't implicitly reconnect
    mysql_real_connect(connection, "127.0.0.1", "testuser", "password",
                       "my_test_db", 3306, NULL, 0);

    // run a simple query
    mysql_query(connection, "select 5");
    mysql_free_result(mysql_store_result(connection));
    std::cout << "First query done...\n";

    // sleep for 11 minutes
    const unsigned int seconds = 660;
    sleep(seconds);

    // run the query again
    if(! mysql_query(connection, "select 5"))
    {
        std::cout << "Second query succeeded after " << seconds << " seconds\n";
        mysql_free_result(mysql_store_result(connection));
    }
    else if(mysql_errno(connection) == CR_SERVER_GONE_ERROR)
    {
        // **** this happens every time ****
        std::cout << "Server went away after " << seconds << " seconds\n";
    }

    // close the connection
    mysql_close(connection);
    connection = nullptr;
    return 0;
}
The stdout of the server process reports that it timed out my connection:
$ sudo journalctl -u mariadb
...
Jul 24 17:58:31 myhost mysqld[407]: 2018-07-24 17:58:31 139667452651264 [Warning] Aborted connection 222 to db: 'my_test_db' user: 'testuser' host: 'localhost' (Got timeout reading communication packets)
...
Looking at a tcpdump capture, I can also see the server sending the client a TCP FIN packet, which closes the connection.
The reason I'm stumped is that I haven't changed any of the default timeout values, none of which are even 600 seconds:
MariaDB [(none)]> show variables like '%timeout%';
+-------------------------------------+----------+
| Variable_name | Value |
+-------------------------------------+----------+
| connect_timeout | 10 |
| deadlock_timeout_long | 50000000 |
| deadlock_timeout_short | 10000 |
| delayed_insert_timeout | 300 |
| innodb_flush_log_at_timeout | 1 |
| innodb_lock_wait_timeout | 50 |
| innodb_print_lock_wait_timeout_info | OFF |
| innodb_rollback_on_timeout | OFF |
| interactive_timeout | 28800 |
| lock_wait_timeout | 31536000 |
| net_read_timeout | 30 |
| net_write_timeout | 60 |
| slave_net_timeout | 3600 |
| thread_pool_idle_timeout | 60 |
| wait_timeout | 28800 |
+-------------------------------------+----------+
So why is the server timing out my connection? Based on the documentation, I would have thought it would have been because of the wait_timeout server variable, but it's left at the default of 8 hours...
BTW, I'm using MariaDB 10.0 and libmariadb 2.0 (from the Ubuntu Xenial Universe repo).
Edit: here's an image of a tcpdump capture catching the disconnect. My Wireshark filter is tcp.port == 55916, so I'm looking at traffic to/from this one client connection. The FIN packet that the server sends is packet 1199, exactly 600 seconds after the previous packet (884).
wait_timeout is tricky. From the same connection do
SHOW SESSION VARIABLES LIKE '%timeout%';
SHOW SESSION VARIABLES WHERE VALUE BETWEEN 500 AND 700;
You should be able to work around the issue by executing
mysql_query(connection, "SET @@wait_timeout = 22222");
Are you connected as 'root' or not?
More connector details:
See: https://dev.mysql.com/doc/refman/5.5/en/mysql-options.html
CLIENT_INTERACTIVE: Permit interactive_timeout seconds of inactivity (rather than wait_timeout seconds) before closing the connection. The client's session wait_timeout variable is set to the value of the session interactive_timeout variable.
https://dev.mysql.com/doc/relnotes/connector-cpp/en/news-1-1-5.html (MySQL Connector/C++ 1.1.5)
It is also possible to get and set the statement execution-time limit using the MySQL_Statement::getQueryTimeout() and MySQL_Statement::setQueryTimeout() methods.
There may also be a TCP/IP timeout.
I'm not sure about the exact reason, but I'm sure wait_timeout is not the only thing that has an effect on this. According to the only error message you have included in your question, there was a problem reading the packet:
Got timeout reading communication packets
I believe it was more that MariaDB had an issue reading the packet than a problem with connecting. I also had a look at the MariaDB client library and found this block:
if (ma_net_write_command(net,(uchar) command,arg,
length ? length : (ulong) strlen(arg), 0))
{
if (net->last_errno == ER_NET_PACKET_TOO_LARGE)
{
my_set_error(mysql, CR_NET_PACKET_TOO_LARGE, SQLSTATE_UNKNOWN, 0);
goto end;
}
end_server(mysql);
if (mariadb_reconnect(mysql))
goto end;
if (ma_net_write_command(net,(uchar) command,arg,
length ? length : (ulong) strlen(arg), 0))
{
my_set_error(mysql, CR_SERVER_GONE_ERROR, SQLSTATE_UNKNOWN, 0);
goto end;
}
}
https://github.com/MariaDB/mariadb-connector-c/blob/master/libmariadb/mariadb_lib.c
So it seems like it sets the error code to "server gone away" when it hits a packet-size issue. I suggest you change the max_allowed_packet variable to some large value and see whether it has any effect.
SET @@global.max_allowed_packet = <some large value>;
https://mariadb.com/kb/en/library/server-system-variables/#max_allowed_packet
I hope this helps, or at least sets you on a path to solving the problem :) Finally, I think you should handle disconnects in your code rather than relying on the timeouts.
If you're running a Galera cluster with HAProxy load balancing, change these parameters in the HAProxy settings:
defaults
    timeout connect 10s
    timeout client 30s
    timeout server 30s
I'm working on a Google Cloud project and I get this error when I run node index.js / try to access the MySQL database remotely.
This is the complete error message:
Error: ER_ACCESS_DENIED_ERROR: Access denied for user 'root'@'external_ip' (using password: YES)
at Handshake.Sequence._packetToError (/home/it21695/nodeproject/node_modules/mysql/lib/protocol/sequences/Sequence.js:52:14)
at Handshake.ErrorPacket (/home/it21695/nodeproject/node_modules/mysql/lib/protocol/sequences/Handshake.js:130:18)
at Protocol._parsePacket (/home/it21695/nodeproject/node_modules/mysql/lib/protocol/Protocol.js:279:23)
at Parser.write (/home/it21695/nodeproject/node_modules/mysql/lib/protocol/Parser.js:76:12)
at Protocol.write (/home/it21695/nodeproject/node_modules/mysql/lib/protocol/Protocol.js:39:16)
at Socket.<anonymous> (/home/it21695/nodeproject/node_modules/mysql/lib/Connection.js:103:28)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:277:12)
at readableAddChunk (_stream_readable.js:262:11)
at Socket.Readable.push (_stream_readable.js:217:10)
--------------------
at Protocol._enqueue (/home/it21695/nodeproject/node_modules/mysql/lib/protocol/Protocol.js:145:48)
at Protocol.handshake (/home/it21695/nodeproject/node_modules/mysql/lib/protocol/Protocol.js:52:23)
at Connection.connect (/home/it21695/nodeproject/node_modules/mysql/lib/Connection.js:130:18)
at Object.<anonymous> (/home/it21695/nodeproject/index.js:10:12)
at Module._compile (internal/modules/cjs/loader.js:678:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
at Module.load (internal/modules/cjs/loader.js:589:32)
at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
at Function.Module._load (internal/modules/cjs/loader.js:520:3)
at Function.Module.runMain (internal/modules/cjs/loader.js:719:10)
This is my index.js code (the test DB and books table both exist):
var mysql = require('mysql');
var connection = mysql.createConnection({
host : 'external_ip',
user : 'root',
password : 'password',
database : 'test',
});
connection.connect();
connection.query('SELECT * from books', function (error, results, fields) {
if (error) throw error;
console.log(results);
});
connection.end();
I have added the MySQL port (3306) to the firewall exceptions and I've granted privileges for the root user. I turned the MySQL and Node.js external IPs to static. I use the passwords that Google Cloud has assigned.
+------------------+----------------+------------+
| user | host | grant_priv |
+------------------+----------------+------------+
| root | % | Y |
| root | external_ip | Y |
| mysql.infoschema | localhost | N |
| mysql.session | localhost | N |
| mysql.sys | localhost | N |
| root | localhost | Y |
| stats | localhost | N |
+------------------+----------------+------------+
mysql v8.0.11, node.js v10.1.0, npm v5.6.0
Before checking the application any further, make sure to check the user credentials and access privileges on the database host server.
This problem is related to permissions, not to Node.js.
First of all, I recommend you install MySQL Workbench and perform your test from that tool.
You also need to make some configuration changes in your MySQL database to allow those remote root connections; take a look at this post:
How to allow remote connection to mysql
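Either way, before changing any grants, a small connection check from Node can tell an auth problem apart from a network/firewall problem. A sketch, reusing the placeholder credentials from the question:
var mysql = require('mysql');

var connection = mysql.createConnection({
  host     : 'external_ip', // placeholder, as in the question
  user     : 'root',
  password : 'password',
  database : 'test',
});

connection.connect(function (err) {
  if (!err) {
    console.log('Connected OK — credentials and grants are fine');
    return connection.end();
  }
  if (err.code === 'ER_ACCESS_DENIED_ERROR') {
    console.log('Server was reached, but credentials/privileges were rejected');
  } else if (err.code === 'ETIMEDOUT' || err.code === 'ECONNREFUSED') {
    console.log('Network/firewall problem — the server was never reached');
  } else {
    console.log('Connect failed:', err.code);
  }
});
Since your error is ER_ACCESS_DENIED_ERROR, the server is reachable and the problem really is on the credentials/privileges side.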
I am using a Couchbase image with base version 4.6.3 in docker-compose, but I am getting the below error while launching docker-compose up:
couchbase_1 | SUCCESS: init/edit couchbase.docker
couchbase_1 | SUCCESS: set hostname for couchbase.docker
couchbase_1 | SUCCESS: bucket-create
couchbase_1 | ....2017-11-13 09:57:06,301: w0 Fail to read json file with error:No JSON object could be decoded
couchbase_1 | .
couchbase_1 | bucket: ., msgs transferred...
couchbase_1 | : total | last | per sec
couchbase_1 | byte : 251515 | 251515 | 3987233.9
couchbase_1 | done
couchbase_1 | /entrypoint.sh couchbase-server
Making the JSON file a compact single line resolved the error.
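If you want to do that compaction with a script rather than by hand, a minimal Node.js sketch along these lines works (compact-json.js is a hypothetical name; the file path is passed as an argument):
// compact-json.js — rewrite a pretty-printed JSON file as a single line
// usage: node compact-json.js <file.json>
const fs = require('fs');

const path = process.argv[2];
const data = JSON.parse(fs.readFileSync(path, 'utf8')); // also surfaces any invalid JSON early
fs.writeFileSync(path, JSON.stringify(data)); // no indentation argument → a single line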
We've discovered a bug in the IoT Agent Ultralight.
If we try to send a measure to a non-existing device, we'll get a 404 - DEVICE_NOT_FOUND error, but at the same time a device without any attributes will be created in IoTA's and Orion CB's databases.
When I say a device without any attributes, I refer to the following:
{
"device_id": "test",
"service": "MyService",
"service_path": "/MyServicePath",
"entity_name": "MyEntity:test",
"entity_type": "MyEntity",
"attributes": [],
"lazy": [],
"commands": [],
"static_attributes": []
}
This is a very important bug, because it's really simple to create as many devices as someone wants, and that could eat up our database space.
Does someone know how to solve it?
Which versions of IoTAgent-UL and the external components (FIWARE Orion Context Broker and MongoDB) are you using? It's highly recommended to use the latest ones.
A good way to run OCB and MongoDB is with Docker images.
Could you provide us the commands/code that you used to send a measure?
For my part, I tested it and it works great.
Command used to send a measure:
#!/bin/bash
mosquitto_pub -t /TEF/sensor03/attrs -m 't|45|c|extreme'
Response from IotAgent-ul:
{"op":"IOTAUL.Executable","time":"2018-09-21T09:59:17.906Z","lvl":"INFO","msg":"Ultralight 2.0 IoT Agent started"}
time=2018-09-21T09:59:32.679Z | lvl=DEBUG | corr=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | trans=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | op=IoTAgentNGSI.MongoDBDeviceRegister | srv=n/a | subsrv=n/a | msg=Looking for device with filter [{"id":"sensor03"}]. | comp=IoTAgent
Mongoose: mpromise (mongoose's default promise library) is deprecated, plug in your own promise library instead: http://mongoosejs.com/docs/promises.html
time=2018-09-21T09:59:32.717Z | lvl=ERROR | corr=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | trans=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | op=IOTAUL.IoTUtils | srv=n/a | subsrv=n/a | msg=MEASURES-001: Couldn't find device data for APIKey [TEF] and DeviceId[sensor03] | comp=IoTAgent
time=2018-09-21T09:59:32.717Z | lvl=ERROR | corr=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | trans=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | op=IOTAUL.Common.Binding | srv=n/a | subsrv=n/a | msg=MEASURES-005: Error before processing device measures [/TEF/sensor03/attrs] | comp=IoTAgent
time=2018-09-21T10:09:51.484Z | lvl=DEBUG | corr=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | trans=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | op=IoTAgentNGSI.MongoDBDeviceRegister | srv=n/a | subsrv=n/a | msg=Looking for device with filter [{"id":"sensor03"}]. | comp=IoTAgent
time=2018-09-21T10:09:51.504Z | lvl=ERROR | corr=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | trans=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | op=IOTAUL.IoTUtils | srv=n/a | subsrv=n/a | msg=MEASURES-001: Couldn't find device data for APIKey [TEF] and DeviceId[sensor03] | comp=IoTAgent
time=2018-09-21T10:09:51.509Z | lvl=ERROR | corr=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | trans=dbbcd94d-50fc-4062-a05b-bccfa76c52c8 | op=IOTAUL.Common.Binding | srv=n/a | subsrv=n/a | msg=MEASURES-005: Error before processing device measures [/TEF/sensor03/attrs] | comp=IoTAgent
Show data from MongoDB
root@727724bdd3d9:/# mongo --shell
MongoDB shell version: 3.2.21
connecting to: test
type "help" for help
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
> show dbs
iotagentul 0.000GB
local 0.000GB
orion 0.000GB
It seems to me that something is wrong in your config setup. Please provide us all the points above before commenting and, for a nice follow-up, could you open it on the GitHub site (IoTAgent-UL) and take it up there?
Thanks,
Fernando Méndez - Research Junior Software Engineer
I am following the free online book "Getting Started with Grails" (http://www.infoq.com/minibooks/grails-getting-started) and I am getting a java.lang.ClassCastException when trying to list any domain class. Can anyone decipher this?
URI: /RaceTrack/runner/list
Class: java.lang.ClassCastException
Message: sun.proxy.$Proxy26 cannot be cast to org.springframework.orm.hibernate3.HibernateCallback
Stack trace:
Line | Method
->> 15 | list in RunnerController.groovy
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 186 | doFilter in PageFragmentCachingFilter.java
| 63 | doFilter in AbstractFilter.java
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 722 | run in java.lang.Thread
Additional info (around line 186 of PageFragmentCachingFilter.java):
183: if(method == null) {
184: log.debug("No cacheable method found for {}:{} {}",
185: new Object[] { request.getMethod(), request.getRequestURI(), getContext() });
186: chain.doFilter(request, response);
187: return;
188: }
189: Collection<CacheOperation> cacheOperations = cacheOperationSource.getCacheOperations(
Additional info (around line 63 of AbstractFilter.java):
60: try {
61: // NO_FILTER set for RequestDispatcher forwards to avoid double gzipping
62: if (filterNotDisabled(request)) {
63: doFilter(request, response, chain);
64: }
65: else {
66: chain.doFilter(req, res);
I had the same issue start happening all of a sudden a couple of days back. Deleting the ~/.grails/2.0.4/.slcache/ directory fixes it for me.
Delete .slcache both at the top of the .grails directory and also, if it exists, under the particular version of Grails being used, for example ~/.grails/2.1.3/.slcache.
This worked when using IntelliJ IDEA to launch the app.
Does the app start up with reloading (spring-loaded agent) disabled?
grails -noreloading run-app
A similar problem has been reported in the Grails Jira as GRAILS-9952. It would help in fixing the problem if you could provide a test app that reproduces it. Please attach that to the Jira issue.