External high-performing tools to load MySQL data [closed]

I have almost a TB of data to load into a MySQL database regularly, once a week.
The server is a low-end machine and every upload takes a long time.
Can anyone suggest a tool or an efficient technique to handle this?

I personally found that LOAD DATA INFILE works best for me. Check it out: http://dev.mysql.com/doc/refman/5.1/en/load-data.html.
But as @duffymo said, if your server simply can't handle this, it doesn't matter how you upload the data; it may not be physically possible to go faster (disks can only write so much data per second).
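For illustration, a minimal sketch of that approach, assuming the weekly dump is a CSV file (the path, table name, and delimiter options below are placeholders to adapt to your schema):

    -- Bulk-load a weekly CSV dump; IGNORE 1 LINES skips the header row.
    LOAD DATA INFILE '/data/weekly_dump.csv'
    INTO TABLE weekly_data
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES;

Loading into an empty table, dropping secondary indexes before the load and recreating them afterwards, and splitting the file into a few large chunks are the usual ways to squeeze more speed out of it.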

It's not a matter of efficiency. No software will fix this. Your problem is the server and the network.
1TB per week? In a single instance of MySQL on an under-powered server? With no sharding or replication? I sincerely doubt that.
But if you must continue, maybe you should look into Hadoop. Keep your data in the Hadoop file system (HDFS); you won't have to move it anywhere. Use Hive for SQL and let MapReduce help with the processing.
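For example, a rough sketch of that setup in Hive (the HDFS path and columns are hypothetical), querying the dump files where they already sit:

    -- External table over CSV files already in HDFS; no load step required.
    CREATE EXTERNAL TABLE weekly_data (
      id      BIGINT,
      payload STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/data/weekly_dumps/';

    -- Runs as a map-reduce job across the cluster.
    SELECT COUNT(*) FROM weekly_data;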

Related

Make MySQL and NodeJS portable [closed]

I need to build a stand-alone NodeJS application for Windows, Mac, and Linux.
My first choice for the database was SQLite, but it seemed too small for my big data.
My reason for wanting MySQL is that it supports a lot of data and it's fast.
But the big problem is installation: MySQL is hard for an end user to install.
An important note: I package the NodeJS project and convert it to an exe file.
I also use the Mosquitto broker in this project, and so far it runs without problems.
Can I use MySQL like SQLite (stand-alone)?
Thanks
It's a mistake to think SQLite is weak, because it's the best choice for your needs.
It's simple, high-performance, stand-alone, and full-featured.
My suggestion is to use SQLite.
In the end I decided to stay with SQLite because:
"SQLite supports databases up to 140 terabytes in size"
I also assign a separate database file to each device, and I think that is the best solution without a MySQL database.
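A minimal sketch of that per-device-file idea in SQLite SQL (the file and table names are just assumptions):

    -- Each device gets its own database file, attached on demand.
    ATTACH DATABASE 'device_42.db' AS dev42;

    CREATE TABLE IF NOT EXISTS dev42.readings (
      ts    INTEGER NOT NULL,   -- unix timestamp
      value REAL    NOT NULL
    );

    INSERT INTO dev42.readings VALUES (strftime('%s','now'), 23.5);
    DETACH DATABASE dev42;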

How to calculate/deal with large amounts of data? [closed]

I have a table in MySQL with about 50 million records (and growing) of subscription consumption data.
Every day I have to select these records and run calculations on them in order to classify different kinds of consumption/clients, for example whether a client is active or inactive, how long it has been active, whether it has changed product, and so on.
At the moment I have different queries to select the different business cases, and then I load the data into the staging area and the data warehouse. However, some of these queries are very slow and they are overloading the production environment.
I would like to know if there is a known solution or technology for this kind of daily task.
I am open to continuing with MySQL or trying a new big-data technology: for example, selecting the millions of raw records into a staging area/ODS every day and then working on them with some other technology.
Does anybody know good solutions for this kind of task?
Thank you.
One option might be replication - http://dev.mysql.com/doc/refman/8.0/en/replication.html
That way you can run whatever queries you want on the replicated DB without impacting the live DB.
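A rough sketch of pointing a replica at the live server (MySQL 8.0.23+ syntax; the host, user, and password are placeholders, and GTID-based replication is assumed):

    -- Run on the replica; a replication user must already exist on the source.
    CHANGE REPLICATION SOURCE TO
      SOURCE_HOST = 'live-db.example.com',
      SOURCE_USER = 'repl',
      SOURCE_PASSWORD = '...',
      SOURCE_AUTO_POSITION = 1;

    START REPLICA;

The heavy daily selects for the staging area then run against the replica, leaving the production server alone.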

How to automate the update process between 2 databases [closed]

I'm quite new to MySQL and to database management in general.
I have to solve this scenario:
During development the web site's database lives on the local machine, and some tables are dedicated to holding data used by the application. As development proceeds, the records in those tables grow, and when we move to production we want to update the production server with the new data.
Can someone advise on the best practice for automating the update process from the local database to the production database?
Thanks in advance
The road to doing this successfully is to have each database know how far it has been migrated.
You should absolutely use something like Liquibase or Flyway for this. If you have a simple database environment, either of them will work. Both track changes in versioned files and record in the database itself which changes have already been applied.
If you need more complexity, as in a sharded environment, you will probably need to roll your own tool for this.
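For example, with Flyway the changes live in versioned SQL files named after its V<version>__<description>.sql convention (the table below is purely illustrative):

    -- V1__create_customer.sql
    CREATE TABLE customer (
      id   INT AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(100) NOT NULL
    );

    -- V2__add_customer_email.sql
    ALTER TABLE customer ADD COLUMN email VARCHAR(255);

Flyway records each applied version in a flyway_schema_history table inside the target database, so running it against production applies only the migrations that are still missing.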
You could maintain a different .sql file for each environment, e.g.
development.sql,
staging.sql,
production.sql
and write a shell script that executes the right one during deployment.
You also need to maintain one constant that identifies the current environment.

Is reading/writing to a MySQL database periodically CPU-intensive? [closed]

I will be writing a program in Delphi that reads from and writes to MySQL database tables on a regular basis, about every 5 seconds. Is this going to be CPU-intensive, or could it get to the point where the computer freezes completely? I know that nonstop reading and writing to and from a hard drive can freeze everything on a computer; I am just not sure about a MySQL database.
Databases are designed to handle many transactions frequently, but it really depends on the queries you are running. A simple SELECT on a couple of rows is unlikely to cause an issue, but large-scale updates targeting many tables, or queries with multiple joins, can slow performance. It all depends on what your queries are.
This all depends on the computer and the complexity of the query.
As David has said, it really does depend on the hardware and queries you are processing.
I would suggest measuring the processing time of each query to determine whether the write operations will stack up on top of the other queries in each 5-second interval.
The MySQL documentation describes how to measure the time your MySQL processes take.
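One quick way to check this from SQL itself (assuming the performance_schema is enabled, as it is by default in recent MySQL versions):

    -- Average and worst latency per statement shape, slowest first.
    -- Timer values are in picoseconds, hence the division to get seconds.
    SELECT digest_text,
           count_star            AS executions,
           avg_timer_wait / 1e12 AS avg_seconds,
           max_timer_wait / 1e12 AS max_seconds
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY avg_timer_wait DESC
    LIMIT 10;

If the slowest statements stay well under your 5-second cycle, the polling loop is unlikely to pile up.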

What storage system to use for real-time messaging? [closed]

I am developing a real-time messaging application (such as WhatsApp and co) and I am facing a big question.
The application itself is not as complicated as what already exists on the market. However, I am not sure which storage system I should use. I have several ideas but I don't know which one is better than the others:
A simple MySQL database with relations between users, conversations, and messages
A MongoDB with a copy of each conversation for every user in the conversation
A Redis store with conversations replicated for every user in the conversation
I don't know which one is better for what I want to do. If you have some advice, it would help me choose the right solution. (Or there may be a better solution I haven't listed :) )
Note: my API is developed in Ruby on Rails (if this helps with the decision).
Data volume and the number of reads/writes should be the key factors leading you to the decision. If neither is going to be huge, you can do it with MySQL; I believe a few TB of data with a few hundred reads/writes per minute is still SQL-database territory. Beyond that, it's the NoSQL world. However, if you choose a NoSQL solution, you should be ready to deal with the increased complexity of non-SQL data-store design, query implementation, and achieving eventual consistency. All the best!
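If you do go the MySQL route, a minimal sketch of the users/conversations/messages shape (all names are illustrative):

    CREATE TABLE users         (id BIGINT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));
    CREATE TABLE conversations (id BIGINT AUTO_INCREMENT PRIMARY KEY, created_at DATETIME);

    -- Who belongs to which conversation (many-to-many).
    CREATE TABLE participants (
      user_id         BIGINT NOT NULL,
      conversation_id BIGINT NOT NULL,
      PRIMARY KEY (user_id, conversation_id)
    );

    -- One row per message; the relational model needs no per-user copies.
    CREATE TABLE messages (
      id              BIGINT AUTO_INCREMENT PRIMARY KEY,
      conversation_id BIGINT NOT NULL,
      sender_id       BIGINT NOT NULL,
      sent_at         DATETIME NOT NULL,
      body            TEXT NOT NULL,
      INDEX (conversation_id, sent_at)  -- fetch a conversation in order quickly
    );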