Access a SQLite db file from inside MySQL?

Given the number of different storage engines MySQL supports, I'm just a bit surprised I can't find one that uses SQLite files.
Does such a thing exist?
The use case is that I have data in multiple MySQL databases that I want to process and export as a SQLite database that other tools can then process.
The current proposed solution is to use a scratch MySQL DB server to access the other instance using the FEDERATED Storage Engine and to access and populate a local SQLite db file using another Storage Engine.
The constraint is that the cost/benefit trade-off can barely justify the proposed workflow (and can't justify writing any code beyond the SQL that reads and processes the federated tables), so I'm strictly limited to "works robustly out of the box" solutions.
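For context, a minimal sketch of the SQL-only half of that workflow on the scratch server might look like this (the table, columns and file path are hypothetical); the final import into SQLite would still have to happen outside MySQL, e.g. with the sqlite3 CLI, since there is no SQLite storage engine:

-- Hypothetical sketch: read/process the FEDERATED tables on the scratch
-- server and dump the result as CSV that sqlite3 can later .import.
-- (Requires the FILE privilege and an OUTFILE path allowed by secure_file_priv.)
SELECT o.id, o.customer_id, o.total
FROM federated_orders AS o        -- FEDERATED table pointing at the remote MySQL instance
WHERE o.created_at >= '2020-01-01'
INTO OUTFILE '/tmp/orders.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';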

Related

Query data from databases on 2 different servers

I want to query data from 2 different database servers using MySQL. Is there a way to do that without having to create a federated database, as Google Cloud Platform does not support the FEDERATED engine?
Thanks!
In addition to #MontyPython's excellent response, there is a third, albeit a bit cumbersome, way to do this if by any chance you cannot use the FEDERATED engine and also cannot manage replication for your databases.
Use an ETL tool to do the work
Back in the day, I faced a very similar problem: I had to join data from two separate database servers, neither of which I had any administrative access to. I ended up setting up Pentaho's ETL suite of tools to Extract data from both databases, Transform it (basically having Pentaho do a lot of work with both datasets) and Load it into my very own local database engine, where I ended up with exactly the merged and processed data I needed.
Be advised, this IS a lot of work (you have to "teach" your ETL tool what you need, and depending on what tool you use, it may involve quite some coding), but once you're done, you can schedule the work to happen automatically at regular intervals so you always have your local processed/merged data readily accessible.
FWIW, I used Pentaho's community edition, so it's free as in beer.
You can achieve this in two ways, one you have already mentioned:
1. Use Federated Engine
You can see how it is done here - Join tables from two different servers. This is a MySQL-specific answer.
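For reference, a minimal sketch of a FEDERATED table definition (host, credentials, schema and column names below are made up); the local column definitions must match the remote table, and the FEDERATED engine has to be enabled on the local server:

-- Hypothetical example: a local FEDERATED table that proxies a table on a remote MySQL server.
CREATE TABLE remote_orders (
  id    INT NOT NULL,
  total DECIMAL(10,2),
  PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://fed_user:fed_password@remote-host:3306/shop/orders';

-- Queries against remote_orders are executed on remote-host, so the table
-- can be joined with local tables like any other table.
SELECT * FROM remote_orders WHERE total > 100;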
2. Set up Multi-source Replication on another server and query that server
You can easily set up multi-source replication using replication channels.
Check out their official documentation here - https://dev.mysql.com/doc/refman/8.0/en/replication-multi-source-tutorials.html
If you have an older version of MySQL where Replication channels are not available, you may use one of the many third-party replicators like Tungsten Replicator.
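For illustration, the per-channel setup on the replica looks roughly like this (host names and credentials are placeholders; this sketch assumes GTID-based replication, MySQL 5.7+/8.0 syntax, and table-based replication metadata repositories on the replica):

-- One replication channel per source server, run on the replica.
CHANGE MASTER TO
  MASTER_HOST = 'source1.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_AUTO_POSITION = 1
  FOR CHANNEL 'source1';

CHANGE MASTER TO
  MASTER_HOST = 'source2.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_AUTO_POSITION = 1
  FOR CHANNEL 'source2';

START SLAVE FOR CHANNEL 'source1';
START SLAVE FOR CHANNEL 'source2';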
P.S. - There is no such thing in MySQL as PostgreSQL's foreign data wrappers (FDW). Joins across servers are easily possible in other database management systems, but not in MySQL.

Storing in a JSON file vs. JSON object vs. MySQL database

I've programmed a chat application using nodejs, expressjs and socket.io.
When I used a MySQL database, the application slowed down, so I replaced it with storing the data in JSON objects (on the server side in Node.js).
Now everything is OK and the application is working well, but if I want to release an update of the Node.js app.js file, it has to be restarted, so everything in the JSON objects will be lost!
How can I fix this problem? Can I store the data in a JSON file to fix it, and will the application stay at the same speed?
Storing data in RAM will always be faster than writing to a database, but the problem in your case is that you need to persist it somewhere.
There are many solutions to this problem, but since you are working with JSON, I recommend looking at the MongoDB database.
MongoDB supports multiple storage engines, but the one you are interested in is the in-memory engine.
An interesting architecture for you could be the following replica set configuration:
Primary with in-memory storage engine
Secondary with in-memory storage engine
Another Secondary with the WiredTiger storage engine.
You will get the speed benefit of storing in RAM, and the data will also be persisted in the database.
A simpler possibility would be to use a key-value store like Redis, which is easier to configure.

Use MongoDB and MySQL together

I have an application where I need to maintain an audit log of the operations performed on a collection. I am currently using MongoDB for storage, which has worked well so far.
Now, for the audit log, I am thinking of using a MySQL database, for these reasons:
1. Using Mongo's implicit audit filter degrades performance.
2. Storage will be huge if I also keep the logs in MongoDB, which will impact replication of the nodes in the cluster.
The logs are not viewed very often in the application, so I am thinking of storing them outside the main storage. I am unsure about using MongoDB together with MySQL; is this the right choice going forward?
Also, is MySQL a good choice for storing the audit log, or could another database help me with storage and conditional queries later?
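For what it's worth, a sketch of what such an audit-log table in MySQL could look like (all names, columns and types here are hypothetical, just to make the idea concrete):

-- Hypothetical audit-log table in MySQL, one row per operation on a MongoDB document.
CREATE TABLE audit_log (
  id           BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  occurred_at  DATETIME     NOT NULL,
  collection   VARCHAR(64)  NOT NULL,   -- which collection was touched
  operation    VARCHAR(16)  NOT NULL,   -- e.g. insert / update / delete
  document_id  VARCHAR(64)  NOT NULL,   -- _id of the affected document
  details      JSON,                    -- optional before/after payload (MySQL 5.7+)
  PRIMARY KEY (id),
  KEY idx_collection_time (collection, occurred_at)
) ENGINE=InnoDB;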
Performance is not guaranteed to improve just by moving to a completely different database system for this purpose alone.
My first attempt at separation would be to create a new database in your current database system and forward the logs there, or even to use a plain text file.
Please share your feedback.

How to use the MySQL MEMORY storage engine in Hibernate?

Is there any way to tell Hibernate to use the MySQL MEMORY storage engine?
Thanks.
Edit: I found that the MEMORY storage engine does not support all the features of a regular storage engine like InnoDB, so it seems logical that there is no option for it.
There should be a properties file where you can put your MySQL URL:
#hibernate.dialect org.hibernate.dialect.MySQLDialect
#hibernate.dialect org.hibernate.dialect.MySQLInnoDBDialect
#hibernate.dialect org.hibernate.dialect.MySQLMyISAMDialect
#hibernate.connection.driver_class com.mysql.jdbc.Driver
#hibernate.connection.url jdbc:mysql:///mysqlURL
#hibernate.connection.username
#hibernate.connection.password
But be aware of this
When using the MyISAM storage engine, MySQL uses extremely fast table locking that allows multiple readers or a single writer. The biggest problem with this storage engine occurs when you have a steady stream of mixed updates and slow selects on a single table. If this is a problem for certain tables, you can use another storage engine for them.
The storage engine used by MySQL is declared when you create your tables. Use the qualifier ENGINE=MEMORY at the end of your CREATE TABLE DDL. Then use it like any other table.
But, of course, remember that if your MySQL server bounces for any reason, all rows will be gone from that MEMORY table when it comes back.
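For example, a minimal sketch (table and column names are made up):

-- Hypothetical MEMORY table: the table definition survives a restart, but the rows do not.
CREATE TABLE session_cache (
  session_id VARCHAR(64)  NOT NULL,
  payload    VARCHAR(255),
  PRIMARY KEY (session_id)
) ENGINE=MEMORY;

INSERT INTO session_cache VALUES ('abc123', 'some cached value');
SELECT * FROM session_cache WHERE session_id = 'abc123';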
Why do you want in-memory storage?
My personal use case scenario for such a setup is testing.
Think about using H2, HSQLDB or Derby. AFAIK they all provide in-memory storage. And if you consistently use Hibernate, it should make no difference which database runs in the background -- at least not from a development standpoint.

Is MySQL Cluster a NoSQL technology?

Is MySQL Cluster a NoSQL technology, or is it another way to use a relational database?
MySQL Cluster uses MySQL Servers as API nodes to provide SQL access/a relational view to the data. The data itself is stored in the data nodes - which are separate processes. The fastest way to access the data is through the C++ API (NDB API) - in fact that is how the MySQL Server gets to the data.
There are a number of NoSQL access methods for getting to the data (that avoid going through the MySQL Server/relational view), including REST, Java, JPA, LDAP and, most recently, the Memcached key-value store API.
It is another way to use the database by spreading it across multiple machines and allowing a simplified concurrent-master setup. It comes with a bit of a cost in that your indexes cannot exceed the amount of RAM available to hold them. To your application, it looks no different than regular MySQL.
Perhaps take a look at Can MySQL Cluster handle a terabyte database.
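As a small illustration of the "looks no different than regular MySQL" point, creating and querying a clustered table from the SQL nodes is ordinary DDL/DML (the table below is made up):

-- Hypothetical example: the rows live in the data nodes, but the application
-- queries the table like any other MySQL table.
CREATE TABLE customers (
  id   INT NOT NULL,
  name VARCHAR(100),
  PRIMARY KEY (id)
) ENGINE=NDBCLUSTER;

SELECT name FROM customers WHERE id = 42;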