Sublime Text 3 plug-in onload event?

I'm trying to develop a plugin for Sublime Text 3.
The plugin uses a node.js server, and I want it to keep running as a single instance (service).
So my plan for the plugin is:
to attempt a TCP connection when the plugin loads, and
if the connection succeeds and data is received, do nothing;
if the connection fails, execute the node command to launch a new server instance.
However, I cannot find any EventListener for plugin load here:
https://www.sublimetext.com/docs/3/api_reference.html
What is the common way to achieve this in a Sublime Text plugin?
Please let me know. Thanks.

You can define a function named plugin_loaded; the code inside it will be executed when the plugin is loaded:
def plugin_loaded():
    # your code here
    pass
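For the specific flow you describe, a rough sketch might look like the following (untested; the port number, the server script path, and having node on your PATH are assumptions to replace with your own setup):

import socket
import subprocess

SERVER_PORT = 8124                      # hypothetical port your node service listens on
SERVER_SCRIPT = "/path/to/server.js"    # hypothetical path to your node server script

def plugin_loaded():
    try:
        # Try to reach an already-running instance of the server.
        conn = socket.create_connection(("127.0.0.1", SERVER_PORT), timeout=1)
        conn.close()
        # Connection succeeded: an instance is already up, so do nothing.
    except (OSError, socket.timeout):
        # Connection failed: no instance is running, so launch a new one.
        subprocess.Popen(["node", SERVER_SCRIPT])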

Related

VerneMQ plugin_chain_exhausted Authentication MySQL

I have a running instance of VerneMQ (a cluster of 2 nodes) on Google Kubernetes, using MySQL (Cloud SQL) for auth. The server accepts connections over TLS.
It works fine, but after a few days I start seeing this message in the log:
can't authenticate client {[],<<"Client-id">>} from X.X.X.X:16609 due to plugin_chain_exhausted
The client app (Paho) complains that the server refused the connection as "not authorized" (code=5 in the Paho error).
After a few retries it finally connects, but every time it gets harder and harder, until it just won't connect anymore.
If I restart VerneMQ, everything goes back to normal.
I have at most 3 clients connected at the same time.
Clients that are already connected have no issues with pub/sub.
In my configuration I have (among other things):
log.console.level=debug
plugins.vmq_diversity=on
vmq_diversity.mysql.* = all of them set
allow_anonymous=off
vmq_diversity.auth_mysql.enabled=on
It's like the server degrades over time, yet the status webpage reports no problems.
My VerneMQ server was built from the git repository about a month ago and runs in a Docker container.
What could be the cause?
What else could I check to find possible causes? Maybe a vmq_diversity misconfiguration?
Thanks.
To quickly explain the plugin_chain_exhausted log: with Verne you can run multiple authentication/authorization plugins, and they will be checked in a chain. If one plugin allows the client, it will be in. If no plugin allows the client, you'll see the log above.
This does not explain the behaviour you describe, though. I don't think I have seen that.
In any case, the first thing to check is whether you actually run multiple plugins. For instance: have you disabled the vmq.passwd and the vmq.acl plugins?
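If they are still enabled, turning them off in vernemq.conf would look roughly like this (a sketch only; double-check the exact plugin names against your installation):

plugins.vmq_passwd = off
plugins.vmq_acl = off

With only vmq_diversity left in the chain, a plugin_chain_exhausted message would then mean the MySQL lookup itself rejected (or failed to authenticate) the client.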

Server rendering in Angular Universal stops working due to Zone.js. How to fix?

When my data requests return simple JSON without connecting to the database, server rendering in Angular Universal works fine.
However, I have found that requests to the MySQL database don't go through if I don't have the proper version of Zone.js.
I had the following error:
TypeError: Cannot read property 'on' of undefined
at Protocol._enqueue (/Applications/MAMP/htdocs/Angular2/myproject/node_modules/mysql/lib/protocol/Protocol.js:152:5)
...
I also noticed that I had a warning:
angular2-universal@2.1.0-rc.1 requires a peer of zone.js@~0.6.21 but none was installed.
So I installed the proper Zone.js:
npm install zone.js@0.6.21
and I started receiving data from MySQL.
BUT! At this point server rendering stops working! I only see:
<!--template bindings={}-->
in the HTML template.
I went back to returning JSON without connecting to MySQL and found that even in this case server rendering does not work.
So I played with this a little and found that if I use the command:
npm install zone.js
then server rendering works properly when I return JSON without connecting to the database, but if I try to connect to MySQL the original error occurs.
So now I have either working server rendering or a MySQL connection without server rendering, but not both.
If anyone knows what should be done I'll appreciate the help. Thank you.
I have found the solution. In my case the issue was resolved by the command:
npm install zone.js@latest
Now I can make requests to MySQL, server rendering works, and I can see all the data in my HTML template.

Exit from socket in TCL without terminating application

I am trying to build a socket server in TCL using the socket -server accept <port> command and then the vwait forever loop - the simplest possible socket.
http://wiki.tcl.tk/15315
I am able to connect to the server fine, but my issue is: how do I close the socket when it is no longer needed, without having to exit the application?
Some context on my application:
It is a Synopsys tool with a TCL shell.
I am planning on building a GUI using Tk, and ideally I would like to develop it in Python for scalability reasons (plus the Tk interface through the Synopsys TCL shell is not the regular Tcl/Tk interface).
When the forever event loop is running, the shell is constantly listening, which makes the application's own TCL prompt unavailable. I am not expecting the shell to be available while a command sent over the socket is running, but I do expect it to be available once that command completes. (I understand this may be a little complex to implement, but I'm putting the question out there.)
When I pass the exit command through the socket, the entire application is closed.
Is there any command that I can pass over the socket to close the socket only and not close the entire application?
Please let me know if more details are needed.
A call to exit will exit the interpreter and cause the process to close. You close sockets by using the close command on the channel.
In a server application with a connected client there are two sockets. Once you have dealt with all transactions from the client, your server code should call close on the client socket; that channel was provided to your code in the callback function you passed in when creating the server socket. The server socket also needs to be closed at some point. How you do that depends on your application and platform: on a Unix system you might use an extension to trap a signal and call close, or you might close the server in response to some input from a control socket or the standard input channel.
The interpreter will not be reading and parsing input unless you add code to do this. Using fileevent you can arrange to read from multiple channels, and using info complete you can read from stdin and evaluate the input to get a REPL loop going for your server. An implementation of this can be found online, and I imagine the wiki has some examples as well.
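A minimal sketch of that pattern (untested; the port number and the "shutdown" keyword are placeholders): each client channel is closed when it is no longer needed, and a special command closes only the listening socket so the surrounding application keeps running.

proc accept {chan addr port} {
    # Service the client from the event loop instead of blocking the shell.
    fileevent $chan readable [list handle $chan]
}

proc handle {chan} {
    if {[gets $chan line] < 0} {
        # Client went away: close just this client channel.
        if {[eof $chan]} { close $chan }
        return
    }
    if {$line eq "shutdown"} {
        # Close this client and the listening socket, then let vwait return,
        # which hands control back to the application's own prompt.
        close $chan
        close $::server
        set ::done 1
        return
    }
    puts $chan "got: $line"
    flush $chan
}

set ::server [socket -server accept 12345]
vwait ::done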

How to keep server and application separate

I have a Node.js application running on a server with node-mysql and Express. At first I faced a problem where some exceptions were not handled and the application would go down with network connectivity issues.
I handled all uncaught exceptions, and this time the server wouldn't go down, but instead it would hang. I figured it was because I returned a response only if the query didn't raise an exception, so I handled all query-related exceptions too.
Next, if the MySQL server terminated the connection for some reason, my application wouldn't reconnect; I tried reconnecting, but it would give an error related to "enqueue connection handshake" or something. From another Stack Overflow question I learned I was supposed to use a connection pool so that if the server terminates the connection it will regain connectivity somehow, which I did.
My question here is that each time I faced an issue I had to shut down the whole application, and since in Node.js the server is configured programmatically, it goes down too. Can I, or better yet, how can I decouple my server and application almost completely, so that if I make a change in my application I don't have to redeploy everything?
Especially for the case where, right now, my application is constantly giving me a connection pool error on the server while the development version works fine, so even if I restart my application I am not sure how I will reproduce this problem in order to diagnose it properly.
Let me know if anyone needs more info regarding my question.
Are you using a front-end framework to serve your application, or are you serving it all from server calls?
So fundamentally, if your server barfs for any reason (e.g. a 500 error), you WANT to shut down and restart, because once your server is in that state, all of your in-transit data and your stack are in an unknown state. There's no way to correctly recover from that, so it is safer, from both a server and an end-user point of view, to shut down the process and restart.
You can minimise the impact of this by using something like Node's Cluster module, which allows you to fork child processes of your server and generate multiple instances of the same server, connected to the same database and listening on the same port. That way, if a user (or your server) manages to hit an unhandled exception, it can kill that process and restart it without shutting down your entire server.
Edit: Here's a snippet:
var cluster = require('cluster');
var threads = require('os').cpus().length;

if (cluster.isMaster) {
    // Fork one worker per CPU core.
    for (var i = 0; i < threads; i++) {
        cluster.fork();
    }
    // When a worker dies, log it and fork a replacement in its place.
    cluster.on('exit', function(dead, code, signal) {
        console.log('worker ' + dead.process.pid + ' died.');
        var worker = cluster.fork();
        console.log('worker ' + worker.process.pid + ' started');
    });
} else {
    //
    // do your server logic in here
}
That being said, there's no way for you to run your application and server separately if Node is serving your client content. Once you terminate your server, your endpoints are down. If you really wanted to be able to keep a client-side application active and reboot your server, you'd have to separate the logic entirely, i.e. have your application in a different project from your server, and use your server for API endpoints only.
As for connection pools in node-mysql: I have never used that module, so I can't say what best practice is there.
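For that last approach, a rough sketch (assuming Express, with a made-up route) of a server that only exposes API endpoints, while the client application lives in a separate project and is deployed on its own:

var express = require('express');
var app = express();

// Hypothetical endpoint; the client app, hosted elsewhere, calls this over HTTP.
app.get('/api/items', function(req, res) {
    res.json([{ id: 1, name: 'example' }]);
});

app.listen(3000, function() {
    console.log('API server listening on port 3000');
});

Redeploying the client then never requires restarting this process, and restarting this process never takes the client down.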

Unable to create indexes in Sphinx after an emergency server restart [Can't create TCP/IP socket]

I'm trying to execute the command in the Windows console:
C:\SphinxSearch\bin\indexer --all --config C:\SphinxSearch\sphinx.conf
But I get an error:
ERROR: index 'indexname': sql_connect: Can't create TCP/IP socket
(10093) (DSN=mysql://root:*@localhost:3306/test).
The data source is MySQL. Before the server restart everything worked fine.
How can I fix it?
I'm having the same error 10093. It's a Windows error code, by the way. In my case it occurs when trying to run the indexer under the SYSTEM account via a scheduled task. If I run it directly as administrator, there's no problem.
According to the documentation for that error code:
Either your application hasn't called WSAStartup(), or WSAStartup() failed, or--possibly--you are accessing a socket which the current active task does not own (i.e. you're trying to share a socket between tasks).
In my case I'm thinking it might be the last one, some security problem due to the SYSTEM user being used in my scheduled task. I was able to solve it by using my admin user instead: in the scheduled task, I set it to use my local admin account with the options "Run whether user is logged on or not" and "Do not store password". I've also checked "Run with highest privileges". This seems to have done the trick, as my indexes are now rotating on schedule.