I was wondering if there is a way to use mysql server through sockets.
What I want is to connect, run queries, and get results by making socket connections and sending packets myself. Does anyone know how I can interact with MySQL this way?
Regards
Almost every language has a MySQL client, so I'm not sure why you want to do this.
But, you would basically have to reimplement the client library. The protocol is by no means simple: http://forge.mysql.com/wiki/MySQL_Internals_ClientServer_Protocol.
You could perhaps write basic functionality, but once you get into all the features and corners of the protocol, it would be a project tremendous in scope (with no purpose).
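To give a feel for the scope: even the very first packet the server sends on connect (the v10 handshake) comes with its own framing you'd have to parse before anything else. A minimal sketch in Python, run here against hand-made sample bytes rather than a live server:

```python
import struct

def parse_handshake(data):
    """Parse the framing and the start of a MySQL v10 handshake packet."""
    # Every MySQL packet: 3-byte little-endian payload length,
    # then a 1-byte sequence id, then the payload itself.
    length = int.from_bytes(data[0:3], "little")
    seq = data[3]
    payload = data[4:4 + length]
    proto = payload[0]                   # protocol version (10 for v10)
    end = payload.index(b"\x00", 1)      # server version is NUL-terminated
    version = payload[1:end].decode()
    (conn_id,) = struct.unpack_from("<I", payload, end + 1)
    return {"seq": seq, "protocol": proto,
            "server_version": version, "connection_id": conn_id}
```

And this only scratches the surface; after the handshake come capability flags, authentication, and the whole command/result-set framing.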
So I have an application that uses webRTC to set up a video chat between 2 browsers. I'm using node.js for the application and socket.io to make the handshake. I have made a successful test in the chrome browser in connecting the clients so now the next step is to allow this to be used by more people.
I was wondering what would be the best way to approach the server side of creating 'rooms' and connecting two people who would like to chat. In the test I just put the person who created the offer in an object, and if the second person matched the first (by UUID or a keyword) the connection would be made. What would be a better, more secure, and more fitting method to do this with more people?
My application currently has MySQL, so should I make use of a table? I feel like that would be time-consuming/too many calls to the DB... Should I focus more on Socket.io? Can node's Socket.io handle a lot of connections to the server well enough?
If my question isn't clear please tell me.
P.S. any GOOD tutorials or articles on setting up a webRTC connection in Firefox would be great. I can add the connection, but for some reason the stream of the one who 'creates offer' isn't being sent over.
A "room" in WebRTC is just used for signaling: a way of sending messages before the actual peer connection is initiated.
"My application currently has MySQL so should I make use of a table?" - that's not exactly the best approach. If your application uses a framework/technology/etc., that doesn't mean you should use it everywhere. Use a technology when it best answers your needs. If such a room has to have some persistent attributes, then it makes sense to use persistent storage, such as a database.
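For purely transient rooms, an in-memory registry on the signaling server is usually enough. A minimal sketch of the pairing logic, written here in Python for brevity (a real server would do the same thing in node/socket.io); the room-id scheme and the two-peer cap are assumptions:

```python
ROOM_CAP = 2  # one offerer + one answerer per WebRTC call

rooms = {}  # room_id -> list of client ids

def join_room(room_id, client_id):
    """Add a client; return the existing peers they should signal to."""
    members = rooms.setdefault(room_id, [])
    if len(members) >= ROOM_CAP:
        raise ValueError("room is full")
    peers = list(members)   # existing members to exchange SDP/ICE with
    members.append(client_id)
    return peers

def leave_room(room_id, client_id):
    """Remove a client; drop the room entirely once it is empty."""
    members = rooms.get(room_id, [])
    if client_id in members:
        members.remove(client_id)
    if not members:
        rooms.pop(room_id, None)
```

Nothing here touches a database; you would only persist a room if it needed attributes that must survive a server restart.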
I am making a JavaFX program and need to use a small MySQL database. Currently I am hosting one on my computer, but I can't access it from other computers on other networks. I need the MySQL server to be accessible from anywhere. How do I host one that does that? Thanks in advance, all help is welcome.
Well you have a few options depending on how important this MySQL database is to you, how you intend to connect to it from outside, and what you want to do with it.
1. The naive approach: open your firewall and forward all incoming traffic on whatever port you have configured MySQL for (3306 by default) to the IP address of your server. If you do this you absolutely must secure your database with a password!!! You'll also need to keep the server's public IP address handy so you can reach it from outside.
2. Use Amazon AWS, Google Compute Engine, Google App Engine, or some other cloud platform to host a MySQL instance. All the big players also tend to host pretty solid managed RDBMS solutions. The advantage here is that you're not exposing your home computer to malice, and you're joining an ecosystem that will answer a lot of other questions for you as they come up along the way (e.g., how do you ensure redundancy? Backups? Scale for traffic?). There's a ton of other advantages too. It's the cloud... dude...
3. Use a SaaS DB service such as Firebase (note: with Firebase we are leaving MySQL and SQL-database territory).
If you plan to let other parties access your MySQL instance to make use of your data, you might also want to consider implementing a REST API (or SOAP API if you hate the future) which acts as an abstraction layer to interact with and provide the data from your database in a consistent and reliable format.
That's the best answer I can give with the details afforded - look around, though; the options in this arena are nearly limitless depending on how and what you're trying to do.
You should be able to access your machine from your LAN pretty easily, unless there are firewall rules preventing connections to it. Another way: many cloud hosting providers have a free tier you can sign up for to bring up a test instance of MySQL. Example: OpenShift.
I am trying to implement a real-time notification system. The approach I am going to use is to open a separate socket using node's socket.io module, record each user event, then send the data to MySQL and use it for notifications. Is this a good approach? Any suggestions?
Personally I would go for Redis + node. The main reason is to lower disk I/O. The downside might be possible data loss with Redis (server reboot, service restart, etc.), but that can also be configured in Redis.
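The general shape of that pipeline can be sketched with a plain in-process queue standing in for Redis (the function names and batch size here are assumptions; in practice the flush would be one bulk INSERT into MySQL, which is exactly how the disk I/O gets lowered):

```python
import queue

events = queue.Queue()
flushed = []  # stands in for the MySQL notifications table

def record_event(user_id, kind):
    """Producer side: push one notification event (cheap, in memory)."""
    events.put({"user": user_id, "kind": kind})

def flush_to_db(max_batch=100):
    """Consumer side: drain up to max_batch events and persist them in one go."""
    batch = []
    while len(batch) < max_batch:
        try:
            batch.append(events.get_nowait())
        except queue.Empty:
            break
    if batch:
        flushed.append(batch)  # one bulk write instead of N single inserts
    return len(batch)
```

With Redis in the middle you get the same pattern across processes, plus optional persistence (RDB/AOF) to limit the data-loss window mentioned above.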
I'm thinking about writing a Javascript based MySQL client.
The client would work like MySQL Query Browser, and would connect to a remote MySQL db.
Are there any - client side - Javascript - MySQL communication libraries?
I've found this topic: How to connect to SQL Server database from JavaScript in the browser?
Are there any similar solutions (not using ActiveXObjects)?
Thanks,
krisy
Javascript (at least in a browser) does not provide socket support (hence the use of an ActiveX object in the example you cited). Nor does it have the low-level type conversions that would be required for implementing a client. So even if you were to work out the MySQL protocol (see mysql-proxy as well as mysqld and the standard client libs), you couldn't speak it from the browser.
So unless you want to write your own browser, you'll need to think about some sort of bridge between javascript and mysql.
A further issue is that most people wouldn't want to give direct DML facilities at the client - so even if you're currently connecting across a VPN, then you need to spend a significant amount of time thinking about authentication and session management.
There's some discussion about database abstraction here and in other places.
If it were me, I'd be thinking about AJAX/JSON from javascript to a bridge, with the bridge running somewhere close to the MySQL DBMS and implemented in a language with native MySQL support (e.g. Perl, PHP) that provides session support over HTTP.
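As a rough illustration of what such a bridge looks like (sketched here in Python rather than Perl/PHP; the named-query whitelist and the db_execute stub are assumptions, standing in for real driver calls against MySQL):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# The browser sends a query *name*, never raw SQL - this is one way
# to avoid handing direct DML facilities to the client.
QUERIES = {
    "list_users": "SELECT id, name FROM users",
}

def db_execute(sql):
    # Stub standing in for a real MySQL driver call on the server side.
    return [{"id": 1, "name": "alice"}]

class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length))
        sql = QUERIES.get(req.get("query"))
        if sql is None:
            self.send_response(400)   # unknown query name: reject
            self.end_headers()
            return
        body = json.dumps({"rows": db_execute(sql)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

The javascript side then just POSTs `{"query": "list_users"}` and renders the JSON rows; authentication and session handling would sit in front of this handler.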
HTH
What is the most efficient way of implementing queues to be read by another thread/process?
I'm thinking of using a basic MySQL table with polling plus sleep. This seems the most scalable option (it doesn't even have to be on the same server) but might result in too many queries to the DB.
You have several options, and it really depends on what you are trying to get the system to do.
Fork child processes and communicate with them over their stdin/stdout pipes.
Create a named pipe on the file system (or a Unix domain socket, like /tmp/mysql.sock). This is basically using sockets to communicate across processes.
Set up a message broker. I'd recommend giving ActiveMQ a try, using the Stomp client for Perl. This is probably your most scalable solution.
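The first option can be sketched in a few lines, using a throwaway Python child that uppercases its input as a stand-in for a real worker program:

```python
import subprocess
import sys

# The child's program, passed via -c: read jobs from stdin,
# write one result per line to stdout.
CHILD = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write(line.upper())\n"
)

def run_jobs(jobs):
    """Feed jobs to the child one per line over its stdin pipe;
    collect its replies from the stdout pipe."""
    proc = subprocess.Popen(
        [sys.executable, "-c", CHILD],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    out, _ = proc.communicate("".join(j + "\n" for j in jobs))
    return out.splitlines()
```

The same read-a-line/write-a-line discipline works for the named pipe and broker options; only the transport changes.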
This is one of those things that is simple to write yourself to your exact specifications. I wrote a toy one here:
http://github.com/jrockway/app-queue
I am not sure it compiles anymore, as AnyEvent::Subprocess has changed significantly since I wrote it. But you can steal the ideas.
Basically, I think an RPC-style infrastructure is the best. You have a server that handles keeping the data. Then clients connect and add data or remove data via RPC calls. This gives you ultimate flexibility with the semantics. You can be "transactional" so that if a client takes data and then never says "hey, I am done with it", you can assume the client died and give the job to another client. You can also ensure that each job is only run once.
Anyway, making a queue work with a relational database table involves a bit of effort. You could use something like KiokuDB for the persistence. (You can physically store the data in MySQL if you desire; KiokuDB just provides a nicer Perl API on top.)
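The "transactional" idea above can be sketched as a toy lease-based queue (all names here are illustrative, not from any real library): a taken job is handed back out if the client never acks it before its lease expires.

```python
import time
from collections import deque

class LeaseQueue:
    def __init__(self, lease_seconds=30.0):
        self.lease_seconds = lease_seconds
        self.pending = deque()   # jobs nobody currently holds
        self.leased = {}         # job_id -> (job, lease deadline)
        self.next_id = 0

    def put(self, job):
        self.pending.append(job)

    def take(self, now=None):
        """Lease the next job; returns (job_id, job) or None if empty."""
        now = time.monotonic() if now is None else now
        self._requeue_expired(now)
        if not self.pending:
            return None
        job = self.pending.popleft()
        self.next_id += 1
        self.leased[self.next_id] = (job, now + self.lease_seconds)
        return self.next_id, job

    def ack(self, job_id):
        """Client finished the job: drop it for good."""
        self.leased.pop(job_id, None)

    def _requeue_expired(self, now):
        # A dead client never acks; give its job to another client.
        for job_id, (job, deadline) in list(self.leased.items()):
            if deadline <= now:
                del self.leased[job_id]
                self.pending.append(job)
```

An RPC server would wrap put/take/ack as its three calls; the lease is what turns "client died mid-job" into "job runs again" instead of "job lost".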
In PostgreSQL you could use the NOTIFY/LISTEN combination; after issuing LISTEN, a client needs only to wait on the PG connection socket instead of polling.