I'm trying to cluster a Node app that uses MySQL, a MySQL session store, and socket.io heavily.
I was looking for a way to adapt the socket.io session storage to MySQL, but I only found Redis adapters (following this guide: https://github.com/elad/node-cluster-socket.io).
Is there a way to implement Node clustering with socket.io and MySQL without installing Redis?
Have you tried this module? https://www.npmjs.com/package/socket.io-mysql-session If it does not satisfy you, you will have to write your own socket.io session manager, which should not take more than 2-3 hours. That said, I believe MySQL is not exactly what you want for storing socket.io data: a socket.io connection (and its state) is lost on every page refresh, so a caching mechanism like Redis is a better choice overall.
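If you do roll your own, one common pattern (a minimal sketch, not the API of the module above; package names and credentials here are illustrative) is to keep Express sessions in MySQL with express-mysql-session and reuse the same middleware for socket.io handshakes:

```js
const express = require('express');
const session = require('express-session');
const MySQLStore = require('express-mysql-session')(session);

const app = express();
const http = require('http').createServer(app);
const io = require('socket.io')(http);

// Sessions are persisted in MySQL, so every clustered worker sees them.
const sessionStore = new MySQLStore({
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'app',
});

const sessionMiddleware = session({
  store: sessionStore,
  secret: 'keyboard cat',
  resave: false,
  saveUninitialized: false,
});

app.use(sessionMiddleware);

// Run the same middleware on the socket.io handshake request so the
// session is available as socket.request.session.
io.use((socket, next) => sessionMiddleware(socket.request, {}, next));

http.listen(3000);
```

Note that this shares HTTP session data across workers; it does not replace a socket.io adapter for broadcasting events between processes.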
I want to test Node and Deno by redirecting users via a proxy to a single MySQL DB.
How will this impact the database?
Can timestamp conflicts arise from CRUD operations, or does MySQL have some mechanism to cope with connections from multiple servers?
What about the performance and RAM footprint of the database? Will it occupy the same amount of space as if only one server were issuing CRUD requests?
What would happen if I added another server that connects to the DB, for example a Java or Go server?
It will have virtually no impact on the database beyond that of any other concurrent process connecting to it.
This is not a Deno issue but a database issue.
The exact same problems can occur even with your current single Node.js instance, because virtually all systems these days are concurrent/parallel by nature.
You might as well replace the Deno app with another Node.js instance, a Java app, etc., or even a second copy of your current Node.js app.
Data in a database can change after you have loaded it on the client, and it is up to you to write the code that handles such scenarios.
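One common way to handle that, shown here as a minimal sketch using the mysql2 package (the items table and its version column are illustrative), is optimistic concurrency control: the update only succeeds if the row has not changed since it was read.

```js
const mysql = require('mysql2/promise');

async function renameItem(pool, id, newName) {
  // Read the row along with its current version number.
  const [rows] = await pool.query(
    'SELECT name, version FROM items WHERE id = ?', [id]
  );
  const current = rows[0];

  // The WHERE clause guards against concurrent modification.
  const [result] = await pool.query(
    'UPDATE items SET name = ?, version = version + 1 WHERE id = ? AND version = ?',
    [newName, id, current.version]
  );
  if (result.affectedRows === 0) {
    // Another process changed the row after we read it: retry or report.
    throw new Error('concurrent modification detected');
  }
}
```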
The fact that MySQL is or is not "ACID" is neither negative nor relevant in and of itself, because the statement has no meaning without context (InnoDB, for instance, is ACID-compliant; MyISAM is not).
If you need complete, absolute integrity on a record, make sure you lock it when you select it, but there will be a trade-off.
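Locking looks roughly like this with the mysql2 package (a minimal sketch; the accounts table is illustrative). SELECT ... FOR UPDATE holds a row lock until the transaction commits or rolls back, and the trade-off is that other writers block in the meantime:

```js
const mysql = require('mysql2/promise');

async function debitAccount(pool, accountId, amount) {
  const conn = await pool.getConnection();
  try {
    await conn.beginTransaction();

    // Other transactions touching this row now wait until we finish.
    const [rows] = await conn.query(
      'SELECT balance FROM accounts WHERE id = ? FOR UPDATE',
      [accountId]
    );
    if (rows[0].balance < amount) throw new Error('insufficient funds');

    await conn.query(
      'UPDATE accounts SET balance = balance - ? WHERE id = ?',
      [amount, accountId]
    );
    await conn.commit();
  } catch (err) {
    await conn.rollback();
    throw err;
  } finally {
    conn.release();
  }
}
```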
I am creating a REST API that uses MySQL as its database. My confusion is: should I connect to the database on every request and release the connection at the end of the operation, or should I connect to the database once at server startup, make the connection globally available, and forget about releasing it?
I would caution that neither option is quite wise.
The advantage of creating one connection for each request is that those connections can interact with your database in parallel, which is great when you have a lot of requests coming through.
The disadvantage (and the reason you might just create one connection on startup and share it) is obviously the setup cost of establishing a new connection each time.
One option to look into is connection pooling (https://en.wikipedia.org/wiki/Connection_pool).
At a high level, you establish a pool of open connections on startup. When you need to make a query, you take one of those connections from the pool, use it, and return it when done.
There are a number of useful Node packages that implement this abstraction; you should be able to find one easily.
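For example, a minimal sketch with the mysql2 package (one of several pooling implementations for Node; credentials and limits here are illustrative):

```js
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'api_user',
  database: 'api_db',
  waitForConnections: true,  // queue requests when all connections are busy
  connectionLimit: 10,       // maximum number of open connections
});

// pool.query() checks a connection out of the pool, runs the statement,
// and returns the connection automatically when it is done.
async function getUser(id) {
  const [rows] = await pool.query('SELECT * FROM users WHERE id = ?', [id]);
  return rows[0];
}
```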
I have a Rails 4.x application running on server A and MySQL on server B.
Using ab to load-test my API calls, I notice that the MySQL server shows CPU activity. So I went back to the code and checked: no SQL statements are triggered. To be sure, I also deactivated all before filters, but the MySQL server still shows CPU load.
I went to MySQL and ran
show processlist;
but that also shows no active SQL statements.
Why would there be load on the DB server?
A Rails application initializes connection pools to the configured database on app load, and it also loads basic schema data for each ActiveRecord model defined, to populate runtime mappings from the DB to instances of that model.
These connections/queries will happen as soon as you have loaded the application and started running traffic.
If this is not what is responsible for the activity on your database server, you will need to use other tools to see what is. For example, New Relic's system monitoring tools are great for snapshotting CPU/memory usage over time, correlated with the processes that were running. This will help you rule out MySQL itself using resources vs. other things running on the DB server.
According to this article, storage engines like InnoDB may have their own per-thread and/or global memory allocations, which is probably what accounts for the CPU overhead. If this is a stock (non-tuned) MySQL install, you're probably just seeing baseline CPU activity. The article mentions a number of places to look that might indicate areas that can be tuned to reduce this footprint.
I'm trying to test whether using memcached with a MySQL server improves performance.
I want to be able to use the normal MySQL command line, but I can't seem to get it to connect to memcached, even when I specify the right port.
I'm running the MySQL command on the same machine as both the memcached process and the MySQL server.
I've looked around online, but I can't seem to find anything about using memcached other than through programming APIs. Any ideas?
Memcached has its own protocol. The MySQL client cannot connect directly to a memcached server.
You may be thinking of the MySQL 5.6 feature that allows the MySQL server to respond to connections using a memcached-compatible protocol and to read and write directly to InnoDB tables. See http://dev.mysql.com/doc/refman/5.6/en/innodb-memcached.html
But this does not allow MySQL clients to connect to memcached -- it's the opposite: it allows memcached clients to connect to mysqld.
Re your comment:
The InnoDB memcached interface is not really a caching solution per se, it's a solution for using a familiar key/value API for persistent data in InnoDB tables. InnoDB does do transparent caching of data pages in its buffer pool, but this is no different from conventional data reads with SQL. InnoDB also commits all changes to its transaction log synchronously on commit.
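In other words, you talk to it with a memcached client, not the mysql client. Here's a minimal sketch using the memjs package from Node (assuming the plugin is installed, listening on the default port 11211, and still mapped to the stock innodb_memcache demo tables from the manual):

```js
const memjs = require('memjs');

async function main() {
  // Connect with the memcached protocol; mysqld answers on this port
  // when the InnoDB memcached plugin is loaded.
  const client = memjs.Client.create('127.0.0.1:11211');

  // Writes go straight into the mapped InnoDB table.
  await client.set('AA', 'HELLO, HELLO', { expires: 0 });

  // Reads come back as Buffers through the memcached protocol.
  const { value } = await client.get('AA');
  console.log(value.toString());

  client.close();
}

main().catch(console.error);
```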
Here's a blog from my colleague at Percona. He tested whether the MySQL 5.6 memcached API could be used as a caching layer, and found that actually using memcached is still superior.
http://www.mysqlperformanceblog.com/2013/03/29/mysql-5-6-innodb-memcached-plugin-as-a-caching-layer/
Here's one conclusion from that blog:
As expected, there is a slowdown for write operations when using the InnoDB version. But there is also a slight increase in the average fetch time.
I'm trying to implement a proxy layer in front of the MySQL server that will catch redundant SQL queries and send them only once to the server. In other words, I have many clients (in PHP, Perl, on different web nodes) that talk to MySQL and very often repeat the same SELECT queries. When traffic goes up, MySQL very often goes down.
The question is: are you aware of any open source (or commercial) tool that can help? I tried MySQL Proxy, but it looks like it can't help.
Two suggestions:
MySQL Proxy
This is a front-end proxy from MySQL which, as far as I know, does what you want.
vtocc
From the Vitess project, used in YouTube's MySQL environment; it does a similar thing. Query consolidation: the ability to reuse the results of an in-flight query for any subsequent requests that were received while the query was still executing.
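If you only need the consolidation part, it can also be approximated in the application layer. Here's a minimal sketch with the mysql2 package (illustrative names): identical SELECTs issued while one is already running share its result instead of hitting MySQL again. Note this only deduplicates in-flight queries; it is not a cache, since results are dropped as soon as the query completes.

```js
const mysql = require('mysql2/promise');

const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'app' });
const inFlight = new Map(); // query text + params -> pending promise

function consolidatedQuery(sql, params = []) {
  const key = sql + JSON.stringify(params);
  if (inFlight.has(key)) {
    return inFlight.get(key); // piggyback on the running query
  }
  const promise = pool
    .query(sql, params)
    .then(([rows]) => rows)
    .finally(() => inFlight.delete(key)); // forget once it settles
  inFlight.set(key, promise);
  return promise;
}
```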
You may want to look into HAProxy and how it works.
Here are two additional suggestions.
SUGGESTION #1: Set up a cluster
If your data is all InnoDB, you should try Percona XtraDB Cluster and use HAProxy in conjunction with it. You can load-balance across all servers in the cluster, including the write master.
SUGGESTION #2: Set up a cluster via MySQL replication to one or more DB servers
Use HAProxy to load-balance your reads across the read slaves.
If you are on a budget and your data is relatively small, set up multiple MySQL instances on one server.