openshift: is it possible to create multiple mysql cartridges?

OpenShift offers scalability. However, it seems to me that if you are using MySQL, MySQL queries/hits will eventually become the bottleneck (assuming you are lucky enough to generate traffic that needs scaling, and considering the max-connections limit on OpenShift).
Suppose I want to use OpenShift: is it possible to create multiple MySQL cartridges to balance the load, and to create dynamic environment variables that assign requests to different MySQL cartridges? (Suppose I send an id or something, and the environment variable for MySQL is set to "dbname" + the last digit of this id.)
This is a simplified example which should multiply the database capacity by ten (if the data is unrelated). Can it be done?
I hope someone from OpenShift will clarify this for me.
Cheers
Edit: Thanks mbaird for your info:
To clarify:
I wasn't talking about auto-scaling, but about using (for instance) eleven static/persistent DB cartridges which would never scale up or down.
Then you could store user information in any of them depending on (also for instance) the last digit of the user's id.
The eleventh database cartridge could hold a lookup table mapping a user's id to the right database (if the last digit is 0, db = db0; if the last digit is 1, db = db1; etc.). This would enable me to call the right database for the right user.
Of course, this is not auto-scaling, but it would multiply the database capacity by (roughly) ten.
However, this would require the ability to create multiple MySQL cartridges and corresponding environment variables to gain access to all of them.
It seems to me this is not possible right now, so I will investigate your suggestions.
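The last-digit routing described above is easy to sketch in application code. Note that the database names ("db0".."db9") and the OPENSHIFT_MYSQL_DB_URL_<n> environment variables below are hypothetical, chosen only to illustrate the scheme; OpenShift does not actually expose multiple MySQL cartridges this way:

```python
# Sketch of the proposed last-digit sharding scheme.
# The db0..db9 names and OPENSHIFT_MYSQL_DB_URL_<n> variables are
# hypothetical -- OpenShift does not provide multiple MySQL cartridges.
import os

NUM_SHARDS = 10

def shard_name(user_id: int) -> str:
    """Map a user id to one of ten fixed databases by its last digit."""
    return f"db{user_id % NUM_SHARDS}"

def shard_url(user_id: int) -> str:
    """Look up the (hypothetical) connection URL for the user's shard,
    falling back to a local default when the variable is unset."""
    env_var = f"OPENSHIFT_MYSQL_DB_URL_{user_id % NUM_SHARDS}"
    return os.environ.get(env_var, f"mysql://localhost/{shard_name(user_id)}")
```

For example, user id 1234567 ends in 7, so that user's data would live in db7.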

The OpenShift database tier currently doesn't scale. Further, even if you could add a second MySQL cartridge it wouldn't give you a scalable database, it would give you a new, empty database. What you are looking for is the ability to scale the MySQL cartridge across multiple gears, not adding another cartridge.
I've actually seen some comments from OpenShift (although I can't seem to find them now) that the databases on OpenShift are for development only and you should look for another service to host your database if you have a mission-critical application that requires database fail-over and scalability.
Since you are specifically using MySQL, I would look into using Amazon RDS (either MySQL or the new Aurora engine which is MySQL compatible) or ScaleDB.

Related

1gb database with only two records

I identified an issue with an infrastructure I created on the Google Cloud Platform and would like to ask the community for help.
A charge was made that I did not expect, as it should theoretically be almost impossible for me to exceed the free-tier limits. But I noticed that my database is huge (1 GB and growing), and there are some very heavy buckets.
My database is managed by a Django app, and in the tool's admin panel there are only 2 records in production. There were several backups and things like that, but I didn't create them.
Can anyone give me a guide on how to solve this?
I would assume that you manage the database yourself, i.e. it is not Cloud SQL. Databases pre-allocate files in larger chunks in order to be efficient on writes. You can check this yourself: write an additional 100k records, and most likely the size will not change.
Go to Cloud SQL and it will say how many SQL instances you have. If you see the "create instance" window, that means you don't have any Google-managed SQL instances, and the one we're talking about is standalone.
You can also check Deployment Manager to see whether you deployed one from the Marketplace or it's a standalone VM with MySQL installed manually.
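To see how much of that 1 GB is actual table data (as opposed to pre-allocated space), you can also ask MySQL itself. This query against the standard information_schema should work on both Cloud SQL and a self-managed instance:

```sql
-- Per-schema data and index size, in MB
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY size_mb DESC;
```

If the reported sizes are far below 1 GB, the space is going to file pre-allocation, binary logs, or backups rather than your records.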
I ran a simple experiment and deployed (from the Marketplace) MySQL 5.6 on top of Ubuntu 14.04 LTS. The initial DB size was over 110 MB, but there are some records in it initially (schema, permissions, etc.), so it's not "empty" at all.
It is still weird that your DB is already 1 GB. Try deploying a new one (if your use case permits). Maybe this is an old DB that was used for some purpose and had all its content deleted afterwards, but some "trash" still remains and takes up a lot of space.
Or it may well be exactly as NeatNerd said: the DB allocates some space up front for performance and optimisation reasons.
If you share more details, I can give you a better explanation.

mysql equivalent to oracle's dblink [duplicate]

Oracle's database link allows users to query multiple physical databases.
Is there any MySQL equivalent? Workaround?
I want to run a join query on two tables which are in two physical databases. Is this possible in MySQL?
I can think of four possible workarounds for your scenario:
1) Use fully qualified table names when querying the external table. MySQL supports the dbname.tablename syntax to access tables outside the current database scope. This requires that the currently connected user has the appropriate rights to read from the requested table in the other physical db.
2) If your external database is running on a different MySQL server (either on the same machine or over a network connection), you could use replication to constantly update a read-only copy of the remote table. Replication is only possible if you're running two separate MySQL instances.
3) Use the FEDERATED MySQL storage engine to virtually import the table into your current database. This lifts the requirement of giving the current user access rights to the second database, as the credentials are given with the CREATE TABLE statement of the FEDERATED table. This also works with databases running on different physical servers or different MySQL instances. I think this will be the poorest-performing option, and it does have some limitations, more or less important depending on your usage scenario and requirements.
4) As an extension of method 1: instead of specifying the fully qualified table name every time you request information from the external table, you can simply create a view inside your current database based on a simple SELECT <<columns>> FROM <<database>>.<<table>>. This resembles the way the FEDERATED method works, but is limited to tables on the same MySQL instance.
Personally I'd consider method (4) as the most useful - but the others could also be possible workarounds depending on your requirements.
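The qualified-name, FEDERATED, and view approaches can be sketched in SQL. The database and table names here (salesdb, orders, customers) and the remote credentials are made up for illustration:

```sql
-- Method 1: fully qualified names (same MySQL instance, given privileges)
SELECT o.id, c.name
FROM orders AS o
JOIN salesdb.customers AS c ON c.id = o.customer_id;

-- Method 3: FEDERATED table pointing at a table on a remote MySQL server
CREATE TABLE customers_remote (
    id   INT NOT NULL,
    name VARCHAR(100)
)
ENGINE = FEDERATED
CONNECTION = 'mysql://user:pass@remotehost:3306/salesdb/customers';

-- Method 4: a local view that hides the qualified name (same instance only)
CREATE VIEW customers AS
SELECT id, name FROM salesdb.customers;
```

Note that the FEDERATED engine is not enabled by default on stock MySQL builds; it has to be switched on in the server configuration.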
There's no MySQL equivalent at the moment; see this post. However, as the poster suggests, you can work around it if the databases are on the same machine by simply adding the database name in front of the table name.
Also see this; it's 6 years old but still not resolved. It's closed and probably not on their to-do list anymore.

MySQL: Location of data when developing on different machines

Having just installed MySQL, which I want to use for research software development, I face the question of where to store my data files.
I have three computers (home, work, laptop), all of which have a development environment (Java/Eclipse) and I want all those machines to be able to access the database(s).
If I just had one machine, it would be a no-brainer and I would just use localhost.
I can't decide where best to locate the data files. The options I am considering (but happy to hear other views) are:
1) Just store on the local machine and let Dropbox take care of syncing the data.
The data might get quite large and exceed the storage capacity of at least one of the machines, and it might also take a long time to sync.
2) Use a Network Storage device (I have a Synology unit)
3) I have my own domain registered, so I could host it there?
4) Use a cloud-based service.
I'm not sure how these work, what they cost, or what the backup options are.
In all of the above, unless I use localhost, I am concerned about access times if I have to go "over the internet", especially if I make heavy use of SQL queries/updates.
I am also worried about backing up the databases in case I need to restore.
You might ask why I want to use MySQL? In the future, I might want to do a PHP roll-out, and MySQL seems the way to go.
Thanks in advance.
G
Maybe consider local installs on each machine, but with MySQL replication. That way, if your laptop doesn't have internet service, you can still work with the local data, even though it might be slightly out of date.
Replication also partly addresses backups.
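A minimal sketch of classic MySQL asynchronous replication: each server's my.cnf needs a unique server-id (with log-bin enabled on the primary), and the replica is then pointed at the primary. The host, credentials, and binlog coordinates below are placeholders; the real coordinates come from SHOW MASTER STATUS on the primary:

```sql
-- On the replica (classic MySQL 5.x statements; all values are placeholders)
CHANGE MASTER TO
    MASTER_HOST = '192.168.1.10',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;
```

With one primary and replicas on the other machines, writes would have to go to the primary, so this fits a mostly-read research workload better than heavy concurrent updates.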
I am not sure about the size of your application. If it's huge, you could use an independent database server; otherwise, use one of your computers to store the data (use the one with the largest disk and most memory).
I think you are wondering how to access data on a different computer. Actually, there is no need to worry about that, because your application always connects through a database connector with the IP/port defined in it. If you use Java/Eclipse, you should use JDBC to access the database.
For example, the JDBC setup looks like this:
import java.sql.*;

// replace localhost and 3306 with the IP and port of your database server
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/myuser", "root", "root");
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery("select * from yourtable"); // your query

Database access related info on clustered environment

My understanding of database clusters is limited because I have not worked with them. I have the questions below.
A database cluster has two instances, db server 1 and db server 2. Each instance will have a copy of the databases; suppose the database contains, say, Table A.
Normally a query request will be handled by only one of the servers, chosen at random.
Question 1: Given access, can we explicitly tell which server should process a query?
Question 2: Given access, can a particular server, say db server 2, be queried directly from outside?
This is for either an Oracle or a MySQL database.
/SR
There are many different ways to implement a cluster. Both MySQL and Oracle provide solutions out of the box - but very different ones. And there's always the option of implementing different clustering on top of the DBMS itself.
It's not possible to answer your question unless you can be specific about what cluster architecture and DBMS you are talking about.
C.
In Oracle RAC (Real Application Clusters), the data storage (i.e. the disks on which the data is stored) is shared, so it's not really true to say there is more than one copy of the data: there is only one copy. The two servers just access the storage separately (albeit with some co-operation).
From an Oracle perspective:
cagcowboy is correct; in an Oracle RAC system there is but one database (the set of files on disk), while multiple database instances (executing programs) on different logical or physical servers access those same files.
In Oracle, a query being executed in parallel can perform work using the resources of any member of the cluster.
One can "logically" partition the cluster through the use of service names, so that a particular application prefers to connect to Member 1 of the cluster instead of Member 2. However, if you force an application to always connect to a particular member of the cluster, you have eliminated a primary justification for clustering: high availability. Similarly, if the application connects to a functionally random member of the cluster, different database sessions with read and/or write interest in the same Oracle rows can significantly degrade performance.
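As a client-side sketch of that service-name preference, a tnsnames.ora entry can point an application at one particular RAC node. The host, port, and service name below are hypothetical:

```
# Hypothetical tnsnames.ora entry that directs an application to one RAC node
APP_PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = app_svc)
    )
  )
```

The usual RAC approach is instead to define the preferred and available instances for a service on the server side, so that clients keep the failover benefits of the cluster.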
