MySQL query calling other SQL files to execute in order

I have a Java EE application that uses Hibernate.
I have created an import.sql file which is loaded each time I start the application.
My issue is that the database is quite big, so I have the startup data prepared in separate SQL files, which I need to load in a certain order.
So within this SQL script file I need to CALL or IMPORT or LOAD the other SQL files in the folder above this one (the path is not a problem).
I would be grateful for a solution for MySQL, and maybe Oracle DB as well (but MySQL is more important at the moment).
The solution I have found so far is not working.
Thanks!

OK, so the thing is that, for the moment, it is not possible to load one query script file from within another. Raw data can be loaded with LOAD DATA INFILE, but chaining script files is otherwise not possible.
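Since the files end up being executed from the application side anyway, one workaround is to run them in order over JDBC instead of trying to chain them inside SQL. (Hibernate's hibernate.hbm2ddl.import_files property also takes a comma-separated list of import files, which may cover the ordering requirement directly.) A minimal sketch, assuming hypothetical file names, connection settings, and a naive split on ';':

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;

public class SqlFileRunner {
    public static void main(String[] args) throws Exception {
        // Hypothetical script list -- replace with your own files, in order.
        List<Path> scripts = List.of(
                Path.of("../sql/01_schema.sql"),
                Path.of("../sql/02_lookup_data.sql"),
                Path.of("../sql/03_startup_data.sql"));

        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement st = con.createStatement()) {
            for (Path script : scripts) {
                // Naive split on ';' -- fine for plain INSERT/UPDATE dumps, not for
                // scripts with stored procedures or ';' inside string literals.
                for (String sql : Files.readString(script).split(";")) {
                    if (!sql.isBlank()) {
                        st.execute(sql);
                    }
                }
            }
        }
    }
}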

Related

Is there any way to import a database directly without using seeding and migration in Laravel

I have a database import file called database.sql and I placed it into my Laravel application's "database/xyz" directory. I want all of its data inserted into the connected database the first time my application runs.
Thanks
You can always run a custom statement in your Laravel application with
DB::statement(" <query here> ");
If your query (or queries; database dumps are usually just a series of queries) lives in a .sql file, there's nothing wrong with reading it from the file. Something like this:
DB::statement( file_get_contents('/path/to/your/database.sql') );
Although your title says you want to avoid migrations (I assume you basically just want to avoid the schema builder), I still recommend putting this in a migration file, since migrations exist for exactly this purpose. Also, Laravel's artisan will make sure that a migration is only run once, so you don't have to handle that yourself.
A simple command will do the job:
php artisan migrate

Import a database to DataGrip (0xDBE)

How do I import a database in DataGrip, just like in phpMyAdmin?
I have the .sql exported from phpMyAdmin... but it has so many lines that the IDE stops working when trying to run the whole .sql.
In DataGrip go to File > Open and select your MySQL dump file. Then right-click the file's tab to get the context menu, and select the "Run [your filename...]" option. It may ask you to select the schema to run against. But this is how I accomplished importing a dump from phpMyAdmin using DataGrip.
The JetBrains documentation on running SQL scripts does not provide much information on processing large insert statements. There is a discussion in the DataGrip community forums, and apparently upcoming features will make working with large scripts easier.
Quote from thread:
Huge SQL files can be executed from Files view (use a context menu action).
I assume you are attempting to import a database export, which is a series of SQL statements saved to a file. There could be a memory issue if you are attempting to run a large SQL file in memory. Try the following.
Insert commit statements into your SQL file with a text editor. This can even be done from within DataGrip. Every couple of hundred statements you can place the line
commit;
which should purge the previous statements from memory. I strongly recommend saving the edited file separately from the original export script. This method is not applicable if you need an all-or-nothing import, meaning that if even one statement or block fails you want all of the statements to be rolled back.
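If hand-editing a several-hundred-megabyte dump is impractical, the commits can be injected mechanically. A minimal Java sketch of that idea (the file names and the batch size of 500 are assumptions, and it naively treats each line ending in ';' as one statement):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CommitInjector {
    public static void main(String[] args) throws IOException {
        int batchSize = 500;  // assumed; tune to taste
        int statements = 0;
        try (BufferedReader in = Files.newBufferedReader(Path.of("dump.sql"));
             BufferedWriter out = Files.newBufferedWriter(Path.of("dump_batched.sql"))) {
            String line;
            while ((line = in.readLine()) != null) {
                out.write(line);
                out.newLine();
                // Count statement terminators; commit every batchSize statements.
                if (line.trim().endsWith(";") && ++statements % batchSize == 0) {
                    out.write("commit;");
                    out.newLine();
                }
            }
        }
    }
}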
1 - Go to View -> Tool Windows -> Files.
2 - Go to the schema folder and open it in Windows Explorer, then paste your dump file there; in my example I will paste MyDump.dmp.
3 - Right-click on MyDump.dmp and run it.
To import data from a script file, run the file as it is described in Run database code. In addition to script files, you can import a CSV, TSV, or any other text file that contains delimiter-separated values.
https://www.jetbrains.com/help/datagrip/import-data.html

Grails with CSV (No DB)

I have been building a Grails application for quite a while with dummy data using MySQL Server; it was eventually supposed to be connected to Greenplum DB (a PostgreSQL cluster).
But this is no longer feasible due to firewall issues.
We are contemplating connecting Grails to a CSV file on a shared drive (which is constantly updated by the Greenplum DB; data is appended hourly only).
These CSV files are fairly large (3 MB, 30 MB and 60 MB). The last file has 550,000+ rows.
Quick questions:
Is this even feasible? Can a CSV be treated as a database, and can Grails directly access this CSV file and run queries on it, similar to a DB?
Assuming this is feasible, how much rework will be required in the Grails code in the datasource, controller and index (currently we are connected to MySQL and we filter data in the controller and index using SQL queries and AJAX calls using remoteFunction)?
Will the constant reading (CSV -> Grails) and writing (Greenplum -> CSV) render the CSV file corrupt or bring up any more problems?
I know this is not a very robust method, but I really need to understand the feasibility of this idea. Can Grails function without any DB, with merely a CSV file on a shared drive accessible to multiple users?
The short answer is: no, this won't be a good solution.
No.
It would be nearly impossible, if possible at all, to rework this.
Concurrent access to a file like that in any environment is a recipe for disaster.
Grails is not suitable for a solution like this.
Update:
Have you considered using the built-in H2 database, which can be packaged with the Grails application itself? This way you can distribute the database engine along with your Grails application within the WAR. You could even have it populate its database from the CSV you mention the first time it runs, or periodically, depending on your requirements.
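A minimal sketch of that idea, using H2's built-in CSVREAD function to fill a table straight from the file (the JDBC URL, file path, and table name are assumptions; a file-based URL would persist the data between runs):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CsvToH2 {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:app", "sa", "");
             Statement st = con.createStatement()) {
            // H2 can create and populate a table directly from a CSV file.
            st.execute("CREATE TABLE readings AS "
                    + "SELECT * FROM CSVREAD('/shared/drive/readings.csv')");
            try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM readings")) {
                rs.next();
                System.out.println("Imported rows: " + rs.getLong(1));
            }
        }
    }
}

From there, ordinary SQL (and GORM, if the table is mapped) works against the H2 copy instead of the raw file.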

Explore database contents from .sql file

I inherited the maintenance of a small web forum. Near as I can tell, it is powered by a MySQL database on the backend (the frontend is all PHP).
I need to extract some of the data (which also involves searching for the data I need to extract), but I don't want to touch the production database. I exported a database backup, which produced a several-hundred-megabyte .sql file.
What's the best way to mine these data? I can see several options:
grep through the .sql script in text mode, trying to extract the relevant data
Load it up in sqlite3 (I tried doing this, but it barfed on some of the statements in the script and didn't produce any tables; I have no database experience whatsoever, though, so I haven't written it off as a dead end just yet).
Install MySQL on my home box, create a database, and execute the .sql script to recreate the data. Then just attach some database explorer tool.
Find some (Linux) app which can understand the .sql file natively (seems unlikely after a bit of Googling).
Any pointers to which of these options (or one I haven't thought of yet) would be the most productive?
I would say any option might work, but for data mining you definitely want to load it into a new database so you can start querying the data and building reports on it. I would load it up on your home box; no need to have it remote.
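For reference, a typical sequence with a local MySQL install looks something like this (the database and file names are assumptions):

mysql -u root -p -e "CREATE DATABASE forum"
mysql -u root -p forum < backup.sql

After that you can point any explorer tool (MySQL Workbench, DataGrip, or the mysql shell itself) at the local copy and search it with ordinary queries.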

Can I use a CSV file like a database (MSSQL, MySQL, or BDE) in C++ Builder?

I apologise in advance if this question isn't very specific.
Would it be possible to do the following?
When the application loads:
read the contents of a CSV file into a dataset.
While the application is running:
operate on that dataset exactly as if it were a MySQL or MSSQL or BDE database (run queries, insert records, delete records, alter records).
When the application closes:
write the dataset back to the CSV file.
You could load the file into a TClientDataSet, operate on the dataset, and apply the changes back to the file.
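The load/operate/write-back pattern itself is straightforward; here is a minimal sketch in Java for illustration rather than C++ Builder (the file name and record layout are hypothetical, and CSV quoting, escaping, and header rows are ignored):

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class CsvDataset {
    public static void main(String[] args) throws Exception {
        Path file = Path.of("records.csv");  // hypothetical file
        // Load: one string array per row (no quoting or escaping handled).
        List<String[]> rows = new ArrayList<>();
        for (String line : Files.readAllLines(file)) {
            rows.add(line.split(","));
        }
        // Operate in memory like a tiny dataset: insert, delete, update.
        rows.add(new String[] {"42", "new record"});
        rows.removeIf(r -> r[0].equals("13"));
        // Write back on "application close".
        List<String> out = new ArrayList<>();
        for (String[] r : rows) {
            out.add(String.join(",", r));
        }
        Files.write(file, out);
    }
}

A real in-memory dataset component (like the TClientDataSet suggested above) adds indexing, filtering, and change tracking on top of this same idea.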