How to separately assign data to each connected user inside Socket.IO?

I am trying to make a card game using Socket.IO, and I am having problems assigning user-specific data (in my case, the cards that each user has).
I'm familiar with JavaScript, but I'm not sure whether Socket.IO has a specific feature for assigning user-specific data, or whether I have to store the information in a database or an array of sorts.

There are ways to attach data to each socket in Socket.IO, but it's probably easier to put your data in an associative array whose keys are the socket IDs. Just create the key-value pair upon connection, and make sure you delete the pair in the disconnect handler so the map doesn't grow forever.
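For illustration, here is a minimal sketch of that pattern. The question concerns the Node server, but the idea is the same in any Socket.IO implementation; this sketch uses the python-socketio server package, and the "deal" event and card handling are invented placeholders:

import socketio

sio = socketio.Server()

# Per-connection state, keyed by socket ID (sid): each entry holds one player's cards.
hands = {}

@sio.event
def connect(sid, environ):
    hands[sid] = []  # create the entry when the user connects

@sio.event
def deal(sid, card):
    hands[sid].append(card)  # hypothetical event: remember a card dealt to this socket

@sio.event
def disconnect(sid):
    hands.pop(sid, None)  # remove the entry on disconnect so the dict doesn't leak

In Node the equivalent is a plain object or Map keyed by socket.id, populated in the connection handler and cleaned up in the disconnect handler.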

Related

How to store socket.id across multiple servers (nodejs and socket.io)

What is the best way to store users' socket ids across multiple servers? Take, for example, a simple chat app: if two users on different servers are sending messages to each other, the two servers must store each user's socket id somewhere so they can route the message from one user to the other.
Currently I am using a Redis hash to store each user's socket id (if they are online), but this doesn't work if a user has two connections (for example, two open tabs of the chat app). Is the best approach to keep using Redis but restructure the data in a way that works when a user is connected twice, or would it be better to move to something like MongoDB or MySQL?
I would also like a way to expire data; for example, if a socket id has been stored for more than 24h it should be automatically deleted. I have looked into doing this with Redis, but it doesn't seem possible to expire a single field inside a hash. Is expiring data something that can be done in MySQL or MongoDB?
Have you tried Socket.IO rooms?
See the Socket.IO documentation on rooms and namespaces.
For example, if a user has multiple connections, join all of them to a room with a unique name (maybe the userId or something similar); anything emitted to that room then reaches every one of the user's connections.
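A minimal sketch of that idea, again using the python-socketio server API for illustration; get_user_id_from_environ is a hypothetical helper that resolves the user from the handshake (for example, from a session cookie):

import socketio

sio = socketio.Server()

@sio.event
def connect(sid, environ):
    user_id = get_user_id_from_environ(environ)  # hypothetical auth lookup
    # Every connection belonging to this user joins the same room, so two
    # open tabs both receive anything addressed to the user.
    sio.enter_room(sid, f"user:{user_id}")

def send_to_user(user_id, message):
    # Emit to the room; no need to track individual socket IDs.
    sio.emit("chat_message", message, room=f"user:{user_id}")

As for the expiry part of the question: Redis TTLs apply to whole keys, not to individual hash fields, so a common workaround is to give each connection its own key with a TTL (e.g. SET user:42:<socketid> 1 EX 86400) instead of packing all of a user's connections into one hash.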

How to store data of different applications in same local MySQL instance if both applications have multi-DB architecture?

Application 1: Suppose I have a Twitter-like application. I need multiple databases/schemas (say, one to store user info, one for user logging, etc.).
Application 2: Suppose I have a blog that also needs logically separated DBs (again, say, one to store user info, one for user logging, etc.).
How can I use the same MySQL instance as the datastore for both? Since each application has multiple, similarly named DBs, there is a risk of confusing database or table names unless I use long names like twitter_users and blog_users.
Is there an effective solution within MySQL?
Another way is to use MaxScale as a DB proxy. It has a rewrite engine, where you can configure a schema-name rewrite for one of the applications. The benefit is that you can run a single MySQL/MariaDB instance and dedicate the whole of the available memory to it.

Why does the Couchbase Server API require a name for new documents

When you create a document using the Couchbase Server API, one of the arguments is a document name. What is this used for and why is it needed?
When using Couchbase Lite you can create an empty document and it is assigned an _id and _rev. You do not need to give it a name. So what is this argument for in Couchbase Server?
In Couchbase Server it is a design decision that all objects are identified by an object ID, key, or name (all the same thing under different names), and these are not auto-assigned. The reason is that keys are not embedded in the document itself; key lookups are the fastest way to get an object, and the server's underlying technology dictates this. Getting a document by ID is much faster than querying for it. Querying means you are asking a question, whereas getting the object by ID means you already know the answer and are just telling the DB to go fetch it for you, which is therefore faster.
If the ID is something random, then more than likely you must query the DB, and that is less efficient. Couchbase Mobile's Sync Gateway, together with Couchbase Lite, can handle this on your behalf if you want it to, as it can have its own keyspace and key pattern that it manages for key lookups. If you are going straight to the DB on your own with a Couchbase SDK, though, knowing the key is the fastest way to get that object. As I said, Sync Gateway handles this lookup for you, as it is the app server. When you go direct with the SDKs you get more control, and different design patterns emerge.
Many people using Couchbase Server create a key pattern that means something to their application. As an example, for a user-profile store I might break the profile into three separate documents, each keyed by a unique username (in this example, hernandez94):
1) login-data::hernandez94 is the object that holds the encrypted password, since I need to fetch it all of the time and want it in Couchbase's managed cache for performance reasons.
2) sec-questions::hernandez94 is the object that holds the user's 3 security questions; since I do not use it very often, I do not care if it is in the managed cache.
3) main::hernandez94 is the user's main document, holding everything else that I might need often, but not nearly as often as the login data.
This way I have tailored my keyspace naming to my application's access patterns, and therefore get only the data I need, exactly when I need it, for the best performance. Since these key names are standardized in my app, I could do a parallelized bulk get on all three of these documents; my app can construct the names, and it would be VERY fast. Again, I am not querying for the data: I have the keys, so I just go get them. I could normalize this keyspace naming further depending on my application's access patterns: email-addresses::hernandez94, phones::hernandez94, appl-settings::hernandez94, etc.
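A sketch of that construct-the-key-and-get pattern, assuming the Couchbase Python SDK 4.x and invented connection details (localhost, a "profiles" bucket, app_user credentials):

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("app_user", "app_password")))
collection = cluster.bucket("profiles").default_collection()

username = "hernandez94"
keys = [f"login-data::{username}", f"sec-questions::{username}", f"main::{username}"]

# Direct key-value lookups: no query engine involved, the app constructs
# the keys and fetches exactly the documents it needs.
docs = {key: collection.get(key).content_as[dict] for key in keys}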

Multi-tenant Django applications: altering database connection per request?

I'm looking for working code and ideas from others who have tried to build a multi-tenant Django application using database-level isolation.
Update/Solution: I ended up solving this in a new open-source project: see django-db-multitenant
Goal
My goal is to multiplex requests as they come in to a single app server (WSGI frontend like gunicorn), based on the request hostname or request path (for instance, foo.example.com/ sets the Django connection to use database foo, and bar.example.com/ uses database bar).
Precedent
I'm aware of a few existing solutions for multi-tenancy in Django:
django-tenant-schemas: This is very close to what I want: you install its middleware at highest precedence, and it sends a SET search_path command to the db. Unfortunately, it is Postgres-specific and I am stuck with MySQL.
django-simple-multitenant: The strategy here is to add a "tenant" foreign key to all models, and adjust all application business logic to key off of that. Basically, each row becomes indexed by (id, tenant_id) rather than (id). I've tried, and don't like, this approach for a number of reasons: it makes the application more complex, it can lead to hard-to-find bugs, and it provides no database-level isolation.
One {app server, django settings file with appropriate db} per tenant, a.k.a. poor man's multi-tenancy (actually rich man's, given the resources it involves). I do not want to spin up a new app server per tenant, and for scalability I want any app server to be able to dispatch requests for any client.
Ideas
My best idea so far is to do something like django-tenant-schemas: in the first middleware, grab django.db.connection and fiddle with the database selection rather than the schema. I haven't quite thought through what this means in terms of pooled/persistent connections.
Another dead end I pursued was tenant-specific table prefixes: Setting aside that I'd need them to be dynamic, even a global table prefix is not easily achieved in Django (see rejected ticket 5000, among others).
Finally, Django multiple database support lets you define multiple named databases, and mux among them based on the instance type and read/write mode. Not helpful since there is no facility to select the db on a per-request basis.
Question
Has anyone managed something similar? If so, how did you implement it?
I've done something similar that is closest to your first idea, but instead of using middleware to set a default connection, Django database routers are used. This allows the application logic to use a number of databases in each request if required. It's up to the application logic to choose a suitable database for every query, and that is the big downside of this approach.
With this setup, all databases are listed in settings.DATABASES, including databases which may be shared among customers. Each model that is customer-specific is placed in a Django app that has a specific app label.
e.g. the following class defines a model which exists in all customer databases:
class MyModel(Model):
    ...

    class Meta:
        app_label = 'customer_records'
        managed = False
A database router is placed in the settings.DATABASE_ROUTERS chain to route database requests by app_label, something like this (not a full example):
class AppLabelRouter(object):
    def get_customer_db(self, model, **hints):
        # Route models belonging to 'myapp' to the 'shared_db' database,
        # irrespective of customer.
        if model._meta.app_label == 'myapp':
            return 'shared_db'
        if model._meta.app_label == 'customer_records':
            customer_db = thread_local_data.current_customer_db()
            if customer_db is not None:
                return customer_db
            raise Exception("No customer database selected")
        return None

    def db_for_read(self, model, **hints):
        return self.get_customer_db(model, **hints)

    def db_for_write(self, model, **hints):
        return self.get_customer_db(model, **hints)
The special part about this router is the thread_local_data.current_customer_db() call. Before the router is exercised, the caller/application must have set up the current customer db in thread_local_data. A Python context manager can be used for this purpose to push/pop a current customer database.
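The answer doesn't show thread_local_data or UseCustomerDatabase; here is a minimal sketch of what they might look like (an assumption, not the answerer's actual code). Saved as thread_local_data.py, it matches the router's thread_local_data.current_customer_db() call:

import threading
from contextlib import contextmanager

_local = threading.local()

def current_customer_db():
    # The alias pushed by the innermost active context manager, or None.
    stack = getattr(_local, 'db_stack', None)
    return stack[-1] if stack else None

@contextmanager
def UseCustomerDatabase(db_name):
    if not hasattr(_local, 'db_stack'):
        _local.db_stack = []
    _local.db_stack.append(db_name)  # push on enter
    try:
        yield
    finally:
        _local.db_stack.pop()  # pop on exit, even if the query raised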
With all of this configured, the application code then looks something like this, where UseCustomerDatabase is a context manager to push/pop a current customer database name into thread_local_data so that thread_local_data.current_customer_db() will return the correct database name when the router is eventually hit:
class MyView(DetailView):
    def get_object(self):
        db_name = determine_customer_db_to_use(self.request)
        with UseCustomerDatabase(db_name):
            return MyModel.objects.get(pk=1)
This is quite a complex setup already. It works, but I'll try to summarize what I see as the advantages and disadvantages:
Advantages
Database selection is flexible. It allows multiple databases to be used within a single request; both customer-specific and shared databases can be used.
Database selection is explicit (not sure if this is an advantage or a disadvantage). If you try to run a query that hits a customer database but the application hasn't selected one, an exception occurs, indicating a programming error.
Using a database router allows different databases to exist on different hosts, rather than relying on a USE db; statement that assumes all databases are accessible through a single connection.
Disadvantages
It's complex to set up, and there are quite a few layers involved to get it functioning.
The need for and use of thread-local data is obscure.
Views are littered with database-selection code. This could be abstracted away with class-based views that automatically choose a database based on request parameters, in the same manner as middleware choosing a default database.
The context manager that chooses a database must be wrapped around a queryset in such a way that it is still active when the queryset is evaluated; see the sketch below.
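This matters because Django querysets are lazy; a small illustration of the pitfall, using the MyModel and router from above:

with UseCustomerDatabase('customer_a'):
    qs = MyModel.objects.all()  # lazy: no query has run yet
rows = list(qs)  # BUG: evaluated here, after the context manager has exited,
                 # so the router raises "No customer database selected"

with UseCustomerDatabase('customer_a'):
    rows = list(MyModel.objects.all())  # evaluated inside the context: OK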
Suggestions
If you want flexible database access, I'd suggest using Django's database routers. Use middleware or a view mixin which automatically sets up a default database for the connection based on request parameters. You might have to resort to thread-local data to store the default database, so that when the router is hit it knows which database to route to. This allows Django to use its existing persistent connections to a database (which may reside on different hosts if wanted), and to choose the database based on routing set up in the request.
This approach also has the advantage that the database for a query can be overridden if needed by using the QuerySet using() function to select a database other than the default.
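For example, with MyModel and the shared_db alias defined earlier in this answer:

# Read this one query from an explicit alias, bypassing the routers' choice.
row = MyModel.objects.using('shared_db').get(pk=1)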
For the record, I chose to implement a variation of my first idea: issue a USE <dbname> in an early request middleware. I also set the CACHE prefix the same way.
I'm using it on a small production site, looking up the tenant name from a Redis database based on the request host. So far, I'm quite happy with the results.
I've turned it into a (hopefully reusable) GitHub project here: https://github.com/mik3y/django-db-multitenant
You could create a simple middleware of your own that determines the database name from the sub-domain (or whatever else) and then executes a USE statement on the database cursor for each request. Looking at the django-tenant-schemas code, that is essentially what it does: it sub-classes the psycopg2 backend and issues the Postgres equivalent of USE, SET search_path. You could create a model to manage and create your tenants too, but then you would be re-writing much of django-tenant-schemas.
There should be no performance or resource penalty in MySQL for switching the schema (db name); it is just setting a session parameter for the connection.
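A minimal sketch of such a middleware, assuming new-style (Django 1.10+) middleware and a hypothetical get_tenant_dbname helper that maps the request host to a database name:

from django.db import connection

class TenantMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        db_name = get_tenant_dbname(request.get_host())  # hypothetical lookup
        with connection.cursor() as cursor:
            # USE cannot be parameterized with placeholders; db_name must come
            # from a trusted whitelist, never from raw user input.
            cursor.execute("USE `%s`" % db_name)
        return self.get_response(request)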

Multiple database connections in Rails

I'm writing a simpler version of phpMyAdmin in Rails; this web app will run on a web server (where users will be able to indicate the database name, hostname, username, password, and port number of one of the database servers running on the same network). The user will then be connected to that machine and will be able to use the UI to administer that database (add or remove columns, drop tables, etc).
I have two related questions (your help will greatly aid me in understanding how to best approach this):
In a traditional Rails application I would store the database info in database.yml, however here I need to do it dynamically. Is there a good way to leave the database.yml file empty and tell Rails to use the connection data provided by the user at run time instead?
Different users may connect to different databases (or even hosts). I assume that I need to keep track of the association between an established database connection and a user session. What's the best way to achieve this?
Thank you in advance.
To prevent Rails from initializing ActiveRecord using database.yml, you can simply remove :active_record from config.frameworks in config/environment.rb. Then, to manually establish connections, you use ActiveRecord::Base.establish_connection. (And maybe ActiveRecord::Base.configurations)
ActiveRecord stores everything connection related in class variables. So if you want to dynamically create multiple connections, you also have to dynamically subclass ActiveRecord::Base and call establish_connection on that.
This will be your abstract base class for any subclass you'll use to actually manage tables. To make ActiveRecord aware of this, you should do self.abstract_class = true within the base class definition.
Then, each table you want to manage will in turn dynamically subclass this new abstract base class.
Your second question is more difficult, because you can't really persist connections, of course. The immediate solution I can think of is storing a unique token in the session and using it in a before_filter to get back to the dynamic ActiveRecord::Base subclass, which you'll probably be storing in a hash somewhere.
This gets more interesting once you start running multiple Rails worker processes:
You will have to store all of the database connection information in the session, so other workers can use it.
You probably want a consistent unique token across workers, so use a hash function on a combination of database connection parameters.
Because a worker may be called with a token it doesn't yet know about, your subclassing and establish_connection logic will probably happen in the before_filter. (Rather than the moment of login, for example.)
You will have to figure out some clever way of garbage-collecting connections and classes for when a user doesn't properly log out and the session expires. (Sorry, I don't know this one.)