I'm using DataGrip and need to do multiple queries of the following format:
SELECT * FROM table WHERE id = '01345' AND date = '01-01-2020'
For each query, the id and date are different. I have a CSV file with many rows, each row containing a different id and date. Is there a way to get DataGrip to iterate through the CSV file and execute all required queries, and save each output as a CSV file (all outputs combined as a single CSV file would also suffice)?
There is no one-step solution, but here is what I would do:
Import the CSV file into a table in a temporary in-memory database, e.g. H2
Write your custom extractor; see the examples by @moscas
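A minimal sketch of the first step in H2, plus one simple way to turn the imported rows into the individual queries (the file path, target table name, and exact column names are assumptions, not taken from the question):

-- Load the CSV into a table in the temporary in-memory H2 database.
-- The path is hypothetical; column names come from the CSV header
-- (quote them, e.g. "id", if the header is lower-case).
CREATE TABLE params AS
SELECT * FROM CSVREAD('/path/to/ids_and_dates.csv');

-- Instead of a full custom extractor, you can also generate the individual
-- queries by string concatenation and copy the result into a console:
SELECT 'SELECT * FROM your_table WHERE id = ''' || "id" ||
       ''' AND date = ''' || "date" || ''';'
FROM params;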
Additionally, see DataGrip blog posts about export and extractors:
Export data in any way with IntelliJ-based IDEs
Data extractors
What objects/functions are available for custom data extractors
Using Azure Data Factory, I want to achieve two similar things.
1) Many files (CSV or similar) sit in a blob container under different folders. I want to take the first line (the header - and in some cases remove several leading lines) off each file and concatenate everything that is left from all files into a single file, also in the blob.
2) Many JSON files (each containing multiple JSON records, but all files in the same folder). I also want to convert them to a single CSV file (concatenate the CSV versions of all the JSON files).
Then we will import that single file into a SQL Server or Synapse table using BULK INSERT, OPENROWSET, or similar. The import part we already have working.
How do we concatenate many files in different directories into one, or similarly convert many JSON files to CSV and concatenate them?
A few additions:
Assume 5 CSV files are new. I will hit a SQL Server database to see whether those files have already been imported; let's say only 3 have not been imported. SQL Server will return a result set with a unique integer fileid and the filename for each of them.
In the concatenated CSV, the first column is the fileid we get from the database; that column does not exist in the source CSV. A similar concept applies to the JSON files: each JSON file contains multiple records, and the fileid is repeated for every record from the same JSON file when the concatenated CSV is created.
Also, in the root of the same blob there are multiple folders, one per file type. Within each folder, many subfolders (multiple levels) are created when new files are added.
If we ran the import process 30 minutes ago, we need a way to detect all new files added to the subfolder structure since the last import.
This solution must be fast and efficient, and it will be part of our ADF pipeline.
I have read access to a database which I want to query. I have a CSV file which contains a list of ids. I want to query the database and find all documents with the ids listed in the CSV file. The normal way, I believe, to go about this would be to import the CSV file as a table or as a new column. However, I only have read permissions, and I don't believe I can get my permissions changed. As such, I cannot change any of the tables or add new tables.
There are 1,000 or so lines in the CSV file, so typing all the ids into a query manually isn't really an option. But I also can't import the CSV file. Is there some solution that would allow me to find all relevant entries (entries whose id matches an id in the CSV file)?
I have tables that live on different MySQL instances. I want to export some data as CSV from one MySQL instance and perform a left join between a table and the exported CSV data. How can I achieve this?
Quite surprisingly, that is possible with MySQL; there are several steps you need to go through.
First, create a template table using the CSV engine and the desired table layout. This is the table into which you will import your CSV file. For example: CREATE TABLE yourcsvtable (field1 INT NOT NULL, field2 INT NOT NULL) ENGINE=CSV. Please note that NULL values are not supported by the CSV engine.
Perform your SELECT to extract the CSV file, e.g. SELECT * FROM anothertable INTO OUTFILE 'temp.csv' FIELDS TERMINATED BY ',';
Copy temp.csv into your target MySQL data directory as yourcsvtable.CSV. Location and exact name of this file depends on your MySQL setup. You cannot perform the SELECT in step 2 directly into this file as it is already open - you need to handle this in your script.
Use FLUSH TABLE yourcsvtable; to reload/import the CSV table.
Now you can execute your query against the CSV table as expected (see the sketch below).
Depending on your data, you need to ensure that the values are correctly enclosed by quotation marks or escaped - this needs to be taken into account in step 2.
The CSV file can be created by MySQL on another server or by some other application, as long as it is well-formed.
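Putting the steps together, a rough sketch - the table you join against (localtable), its id column, and the file paths are assumptions, not from the question:

-- Step 1: template table using the CSV storage engine (no NULL-able columns).
CREATE TABLE yourcsvtable (
  field1 INT NOT NULL,
  field2 INT NOT NULL
) ENGINE=CSV;

-- Step 2: export the data from the other instance into a CSV file.
SELECT field1, field2
FROM anothertable
INTO OUTFILE '/tmp/temp.csv'
FIELDS TERMINATED BY ',';

-- Steps 3-4: copy /tmp/temp.csv over yourcsvtable.CSV in the data directory
-- (outside of SQL), then tell MySQL to re-read the file.
FLUSH TABLE yourcsvtable;

-- Final step: the imported data is now an ordinary table you can left join against.
SELECT t.*, c.field2
FROM localtable t
LEFT JOIN yourcsvtable c ON c.field1 = t.id;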
If you export it as CSV, it's no longer SQL; it's just plain row data. I suggest you export it as SQL instead and import that into the second database.
I know this is a really basic question, but I am struggling with my first import of data from an XML file. I have created the table "Regions", which has just two columns - ID and Name. The XML file contains the same column names.
In order to bulk import the data, I am using the following SQL command:
LOAD XML LOCAL INFILE 'C:\Users\Dell\Desktop\regions.xml'
INTO TABLE Regions (ID, Name)
but I am getting the error #1148 - The used command is not allowed with this MySQL version
Having researched this on the internet, allowing this command requires a change to one of the server configuration files, but my service provider doesn't give me access to it. Is there an alternative way to write the SQL code that does exactly the same thing as the code above, which is basically just to import the data from an XML file?
Many thanks
Since LOAD DATA INFILE isn't enabled for you, it appears you have only one more option, and that's to create a set of INSERT statements, one for each row. If you convert your XML file to CSV using Excel, that's an easy step. Assuming you have rows of data like this:
A | B
-----|-------------------------
1 | Region 1
2 | Region 2
I would create a formula like this in column C:
=CONCATENATE("INSERT INTO Regions(ID,Name) VALUES(",A1,",'",B1,"');")
This will result in INSERT INTO Regions(ID,Name) VALUES(1,'Region 1'); for your first row. Fill this down to the last row of your spreadsheet. Select all the INSERT statements and copy them into a query text box inside phpMyAdmin, and you should be able to insert your values.
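For the two sample rows, the generated statements look like this; as a small variation (not part of the recipe above), MySQL also accepts a single multi-row INSERT, which is a bit faster for large files:

-- One statement per spreadsheet row, as produced by the formula.
INSERT INTO Regions(ID,Name) VALUES(1,'Region 1');
INSERT INTO Regions(ID,Name) VALUES(2,'Region 2');

-- Equivalent multi-row form.
INSERT INTO Regions(ID,Name) VALUES
  (1,'Region 1'),
  (2,'Region 2');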
I've used this method many times when I needed to import data into a database.
I already have a table in phpMyAdmin that contains user records. Each user has a unique admission number. I now want to add a new column to this table and was wondering how I can import data for this new column using just the admission number and the new data.
Is this possible? I have a CSV but can't work out the best way to import the data without overwriting any existing records.
Thanks.
As far as I can see, this is not possible. phpMyAdmin's import features are for whole rows only.
You could write a small PHP script that opens the CSV file using fgetcsv(), walks through every line, and creates an UPDATE statement for each record:
UPDATE tablename SET new_column = "new_value" WHERE admission_number = "number"
You can then either output the statements and copy and paste them, or execute them directly from the script.
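The script's output is simply a batch of such statements; the table, column, and values below are the placeholders from the statement above, and the transaction wrapper is optional (and only effective on InnoDB tables):

-- One UPDATE per CSV row; names and values are placeholders.
START TRANSACTION;

UPDATE tablename SET new_column = 'value 1' WHERE admission_number = 'number 1';
UPDATE tablename SET new_column = 'value 2' WHERE admission_number = 'number 2';
-- ... one statement per remaining CSV row ...

COMMIT;  -- or ROLLBACK; if the result looks wrong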
If you want to do it using just CSV, here are the steps you could perform.
In a text editor, make a comma-separated list of all the column names for the final table (including your new column). This will be useful when importing the new data.
Add the new column to your table using phpMyAdmin
Export the current table in CSV format and sort it by admission number in Excel
In your new data CSV, sort by admission number
Copy the column over from your new data to your exported CSV and save for re-import.
Backup your users table (export to CSV)
Truncate the contents of your table (Operations, Truncate)
Import your updated CSV
Optional / Recommended: When you import the CSV into phpMyAdmin, use the column names option to specify the columns you are using, separated by commas (no spaces).
Assumptions:
1. You are using a spreadsheet application such as Excel or OpenOffice to open your CSV files.
Any problems?
Truncate the table again and import the SQL backup file.
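If you prefer to do the schema and truncate steps in SQL rather than through the phpMyAdmin interface, they amount to something like this - the table name users, the column names, and the column list are assumptions; the CSV import itself still happens through phpMyAdmin's Import tab:

-- "Add the new column to your table" (hypothetical table/column names and type).
ALTER TABLE users ADD COLUMN new_column VARCHAR(255) NULL;

-- "Truncate the contents of your table", after exporting a backup.
TRUNCATE TABLE users;

-- Then re-import the merged CSV via phpMyAdmin, listing the columns in the
-- import dialog, e.g.: admission_number,name,new_column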