Using the Erlang mysql module, how is a database connection closed?

In the Erlang mysql module, the exported external functions are:
%% External exports
-export([start_link/5,
start_link/6,
start_link/7,
start_link/8,
start/5,
start/6,
start/7,
start/8,
connect/7,
connect/8,
connect/9,
fetch/1,
fetch/2,
fetch/3,
prepare/2,
execute/1,
execute/2,
execute/3,
execute/4,
unprepare/1,
get_prepared/1,
get_prepared/2,
transaction/2,
transaction/3,
get_result_field_info/1,
get_result_rows/1,
get_result_affected_rows/1,
get_result_reason/1,
encode/1,
encode/2,
asciz_binary/2
]).
From this list, it is not apparent how to close a connection.
How is a connection closed?

I quickly browsed through the mysql_driver code. You're right - it doesn't seem to have a mechanism for closing opened connections. In fact, I don't even see proper clean-up code to close the open sockets when, say, a gen_server gets shut down (in the terminate callback).

{Type, Result} = mysql:start_link(P1, Host, User, Passwd, DB),
Calling stop(Result) then closes the connection.
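Since stop/1 does not appear in the export list above, a hedged fallback (assuming the connection is the gen_server process returned by start_link, as the module's start_link/5 API suggests) is to stop that process yourself:

%% Sketch, not part of the mysql module's documented API: terminate the
%% connection's gen_server; the OS releases its socket when the process dies.
close_connection(Pid) when is_pid(Pid) ->
    gen_server:stop(Pid).  %% OTP 18+; stops the server synchronously

As the answer above notes, the driver's terminate callback does no explicit socket clean-up, so this relies on the socket being closed when the owning process exits.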


Avoid MySQL Connection during Websocket

I have a question regarding the flow of Go code.
In my main function, I open a MySQL connection and then use `defer` to close the connection when main returns.
I have a route where a WebSocket is set up and used.
My question is: will the program open a connection every time the WebSocket is used to send and receive a message, or will it open just once, when the page is loaded?
Here is what my code looks like:
package main

import (
    // Loading various packages
)

func main() {
    // Opening DB connection -> *sql.DB
    db := openMySql()
    // Closing DB connection
    defer db.Close()
    // Route for "websocket" endpoint
    app.Get("/ws", wsHandler(db))
    // Another route using the "WebSocket" endpoint.
    app.Get("/message", message(db))
}
Now, while a user is on the "/message" route, whenever they send a message to other users, will a MySQL open-and-close-connection event happen every time a message is sent and received via the "/ws" route?
Or will it happen just once, when the "/message" route and the "/ws" event are first called?
My purpose in using "db" in the "wsHandler" function is to verify whether the user has permission to send a message to a particular room.
But there is no point opening and closing a connection every second while the WebSocket emits message or typing events.
What would be the best way to handle permission checking in the "/ws" route if the above code is a horror? Consider that there will be a few hundred thousand concurrent users.
Assuming db is a *sql.DB, your code seems fine. I'm also assuming that your example is incomplete and that your main does not actually return right away.
The docs on Open state:
The returned DB is safe for concurrent use by multiple goroutines and maintains its own pool of idle connections. Thus, the Open function should be called just once. It is rarely necessary to close a DB.
So wsHandler and message should be fine using it as they please, as long as they don't close the DB themselves.
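A minimal sketch of what that sharing looks like in practice. Here canPost and its permissions table are hypothetical, and the standard net/http router stands in for the question's app:

package main

import (
    "database/sql"
    "log"
    "net/http"

    _ "github.com/go-sql-driver/mysql"
)

// canPost is a hypothetical permission check; each call borrows an idle
// connection from the pool and returns it when the query finishes.
func canPost(db *sql.DB, userID, room string) (bool, error) {
    var allowed bool
    err := db.QueryRow(
        "SELECT allowed FROM permissions WHERE user_id = ? AND room = ?",
        userID, room,
    ).Scan(&allowed)
    return allowed, err
}

// wsHandler closes over the shared *sql.DB; there is no sql.Open per request.
func wsHandler(db *sql.DB) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        ok, err := canPost(db, r.URL.Query().Get("user"), r.URL.Query().Get("room"))
        if err != nil || !ok {
            http.Error(w, "forbidden", http.StatusForbidden)
            return
        }
        // ... upgrade to a WebSocket here and reuse db for later checks ...
    }
}

func main() {
    // sql.Open is called once; it returns a pool, not a single connection.
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/app")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    http.HandleFunc("/ws", wsHandler(db))
    log.Fatal(http.ListenAndServe(":8080", nil))
}

At a few hundred thousand concurrent users, one pooled lookup per message is usually acceptable; caching the result per socket after the first check removes even that cost.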

Fiware CEP server stops responding

While developing in Fi-Cloud's CEP I've been running into an issue that keeps recurring. As I try to develop a definition to perform a task, the CEP server and Authoring Tool stop responding, although ssh is still responsive.
This issue happens as I develop. I'm using the Authoring Tool to alter the definition bit by bit, and then I re-upload it to the server through the Authoring Tool's export feature.
To reinitiate the Proton with the new definition each time I alter it, I use Google's Postman with this single operation:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
header: 'Content-Type': 'application/json'
body: {"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
At the same time, I'm logged in with three ssh instances: one to monitor the files being created in /opt/tomcat10/sample/ (among other things), and the other two to 'tail -f' the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I iterate through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By tailing tomcat's log file (# tail -f /opt/tomcat10/logs/catalina.out) I can see that, under these circumstances, if I attempt a:
GET http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
I get no response back and tomcat logs the following response:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
Ssh is still responsive, and I can look at tomcat's log this way.
To get past this and continue, I exit the ssh connections and restart the CEP instance in the Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to developing?
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
1. Export the definition using the authoring tool export (as you did).
2. Stop the engine run, using REST PUT:
PUT //host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"stop"}
3. Start the engine, using REST PUT:
PUT //host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name that the engine is already working with.
Activating the "ChangeDefinitions" action only influences the next run of the CEP and has no influence on the current run.
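If you prefer scripting that stop/start cycle instead of clicking through Postman, here is a hedged sketch using Python's requests library ({ip} stays a placeholder, exactly as in the question):

import requests

ADMIN_URL = ("http://{ip}:8080/ProtonOnWebServerAdmin"
             "/resources/instances/ProtonOnWebServer")
HEADERS = {"Content-Type": "application/json"}

def restart_engine():
    # Stop the current run, then start it again so the engine reloads
    # the definition previously exported from the Authoring Tool.
    requests.put(ADMIN_URL, headers=HEADERS,
                 json={"action": "ChangeState", "state": "stop"})
    requests.put(ADMIN_URL, headers=HEADERS,
                 json={"action": "ChangeState", "state": "start"})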
This answers your question about how to update a CEP definition.
Hope it solves your issue.

Python 3.4 Sockets sendall function

import socket

def functions():
    print("hello")

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('192.168.137.1', 20000)
sock.bind(server_address)
sock.listen(1)

conn, addr = sock.accept()
print('Connected by', addr)
conn.sendall(b"Welcome to the server")
My question is how to send a function to the client.
I know that conn.sendall(b"Welcome to the server") will send data to the client, which can be decoded.
I would like to know how to send a function to a client, like conn.sendall(function()), but this does not work.
I would also like to know what function would allow the client to receive the function I am sending.
I have looked on the Python website for a function that could do this, but I have not found one.
The functionality you are requesting is fundamentally impossible unless it is explicitly coded on the client side. If it were possible, one could write a virus that easily spreads to any remote machine. Instead, it is the client's own responsibility to decode incoming data in whatever manner it chooses.
Considering the case where a client really does want to receive code to execute, the issue is that the code must be represented in a form which, at the same time:
- is detached from the server context and its specifics, and can be serialized and executed anywhere;
- allows secure execution in a kind of sandbox, because a very rare client will allow arbitrary server code to do anything on the client side.
The latter is an extremely complex topic; you can read any WWW browser's security history: most of the closed vulnerabilities are issues in such sandboxing.
(There are environments where such execution is allowed and desired, e.g. an Erlang cookie-based peering cluster. But in such a cluster, side B is also allowed to execute anything at side A.)
You should start by searching for an execution environment (a high-level virtual machine) that conforms to your needs in functionality and security. For Python, look at the multiprocessing module: its worker pool implementation doesn't pass the code itself, but simplifies passing data for execution requests. Passing arbitrary Python data (without functions) is covered by the marshal and pickle modules.
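A minimal sketch of the safe alternative described above: instead of sending the function itself, send a command name that the client maps to a function it already defines locally (send_msg, recv_msg, and the COMMANDS table are all illustrative, not a standard API; pickle here carries only data, never code):

import pickle
import struct

def send_msg(conn, obj):
    # Length-prefix the pickled payload so the receiver knows where it ends.
    payload = pickle.dumps(obj)
    conn.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(conn):
    # Caution: only unpickle data from peers you trust; pickle can run
    # code during deserialization.
    size = struct.unpack("!I", conn.recv(4))[0]
    data = b""
    while len(data) < size:
        data += conn.recv(size - len(data))
    return pickle.loads(data)

# Client side: commands map to functions the client defines for itself;
# the server only ever names one, it never ships code.
COMMANDS = {"hello": lambda: print("hello")}

def handle(conn):
    name, args = recv_msg(conn)   # e.g. ("hello", [])
    COMMANDS[name](*args)         # only pre-approved local code runs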

My Node.js script is not exiting on its own after successful execution

I have written a script to update my db table after reading data from db tables and Solr. I am using the async.waterfall module. The problem is that the script does not exit after all operations complete successfully. I have used a db connection pool too, thinking that it might be what makes the script wait infinitely.
I want to put this script in crontab, and if it does not exit properly it will create a hell of a lot of unnecessary instances.
I just went through this issue.
The problem with just using process.exit() is that the program I am working on was creating handles but never destroying them.
It was processing a directory and putting data into OrientDB.
So some of the things I have come to learn are that database connections need to be closed before getting rid of the reference, and that process.exit() does not solve all cases.
When my project processed 2,000 files, it would get down to about 500 left, and the extra handles would have filled up the available working memory. That means it could not continue, and therefore never reached the process.exit() at the end.
On the other hand, if you close the items that are requesting the app to stay open, you can solve the problem at its source.
The two "Undocumented Functions" that I was able to use, were
process._getActiveHandles();
process._getActiveRequests();
I am not sure what other functions will help with debugging these types of issues, but these ones were amazing.
They return an array, and you can determine a lot about what is going on in your process by using these methods.
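For instance, a hedged sketch of how they might be used (both functions are undocumented, so their return values can change between Node versions):

function dumpOpenThings() {
  // Each entry is a live object (socket, timer, server, ...) that keeps
  // the event loop alive; its constructor name is usually hint enough.
  process._getActiveHandles().forEach(function (h) {
    console.log('handle:', h.constructor.name);
  });
  process._getActiveRequests().forEach(function (r) {
    console.log('request:', r.constructor.name);
  });
}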
You have to tell it when you're done, by calling
process.exit();
More specifically, you'll want to call this in the callback from async.waterfall() (the second argument to that function). At that point, all your asynchronous code has executed, and your script should be ready to exit.
EDIT: As pointed out by @Aaron below, this likely has to do with something like a database connection being active and not allowing the node process to end.
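A hedged sketch of that shape, assuming the mysql connection pool from the question (the pool and the queries are stand-ins for the real tasks; pool.end closes the pooled connections so nothing keeps the event loop alive):

const async = require('async');
const mysql = require('mysql');

const pool = mysql.createPool({ host: 'localhost', user: 'me', database: 'app' });

async.waterfall([
  function readRows(next) {
    pool.query('SELECT id FROM items', next); // next(err, rows, fields)
  },
  function updateRows(rows, fields, next) {
    // ... update the table / talk to Solr here ...
    next(null);
  },
], function done(err) {
  // Final callback: every step has finished (or one failed).
  pool.end(function () {
    process.exit(err ? 1 : 0); // close the pool first, then exit
  });
});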
You can use the node module why-is-node-running:
Run npm install -D why-is-node-running
Add import * as log from 'why-is-node-running'; in your code
When you expect your program to exit, add a log statement:
afterAll(async () => {
  await app.close();
  log();
});
This will print a list of open handles with a stacktrace to find out where they originated:
There are 5 handle(s) keeping the process running
# Timeout
/home/maf/dev/node_modules/why-is-node-running/example.js:6 - setInterval(function () {}, 1000)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
# TCPSERVERWRAP
/home/maf/dev/node_modules/why-is-node-running/example.js:7 - server.listen(0)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
We can quit the execution by destroying the open database connection:
connection.destroy();
If you use Visual Studio Code, you can attach to an already running Node script directly from it.
First, run the Debug: Attach to Node Process command.
When you invoke the command, VS Code will prompt you to pick which Node.js process to attach to.
Your terminal should display this message:
Debugger listening on ws://127.0.0.1:9229/<...>
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Then, inside your debug console, you can use the code from The Lazy Coder’s answer:
process._getActiveHandles();
process._getActiveRequests();

Best way to connect to database for this application

I have a Delphi application which hits a database (usually MySql) every 60 seconds through a TTimer. The application is more or less an unattended bulletin board. If the network drops, the application needs to continue running and reconnect to the database when the connection comes back. Often it runs over broadband, so chances are the connection is not always the best.
I am using the TAdoConnection component. It is opened at application startup and remains open. Whenever I need to make a new query, I set the query's Connection to the open TAdoConnection. But I am finding this is not very reliable when there is a network drop.
What is the best way to connect to the database in this instance?
I have seen ways where you can build the connection string directly into the TAdoQuery. Would this be the proper way, or is it excessively resource intensive? Sometimes I need to open 5-10 queries to get all the information.
Or how about doing this in the TTimer.OnTimer event:
Create TAdoConnection
Do All Queries
Free TAdoConnection
Thanks.
You should use a single TAdoConnection object to avoid setting a connection string on each component. Keep your connection object closed and open it only when you need to access data. Something like this:
procedure OnTimer;
begin
  MyAdoConnection.Open;
  try
    // Data access code here
    ...
  finally
    MyAdoConnection.Close;
  end;
end;
You can additionally put a try/except block around MyAdoConnection.Open to catch the situation where the network is not available.
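A sketch of that guard, using the same names as above (when the network is down, the tick is simply skipped and retried on the next timer interval):

procedure OnTimer;
begin
  try
    MyAdoConnection.Open;
  except
    on E: Exception do
      Exit; // network is down: skip this tick, retry on the next one
  end;
  try
    // Data access code here
  finally
    MyAdoConnection.Close;
  end;
end;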
About the second part of your question: it would be best to put all your data access components into a data module that you create whenever you need to run data access procedures. You can then keep all your data access code in that data module, separated from the rest of the code.
You could try opening the connection in the data module's OnCreate event, but be careful to handle possible exceptions when opening it, and close the connection in the OnDestroy event. Then you can use the data module like this:
procedure OnTimer;
var
  myDataModule: TMyDataModule;
begin
  myDataModule := TMyDataModule.Create(nil);
  try
    // Data access code here
    myDataModule.DoSomeDatabaseWork;
  finally
    myDataModule.Free;
  end;
end;