Google BigQuery - Import CSV - How to match columns?

I have regular imports going to BigQuery via CSV which work fine.
The CSV file format is:
[1st line] = header: column names that exactly match the column names in the BigQuery table I am importing into
[rest of the lines] = the data
However, the order of the columns in my CSV recently changed, and when importing to BigQuery the CSV columns are not matched to the BigQuery table columns by name. They are simply imported in the order they appear in the CSV, which is wrong.
Is there a way to tell BigQuery which column from my CSV goes to which column in BigQuery table?
I am using the official PHP library.
Example: https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/bigquery/api/src/functions/import_from_file.php

A CSV import won't match columns by name (even if you are using the first row to name the columns); fields are mapped by position only.
The best you can do is import into a different table that matches the column order of the new files, and then run a SELECT that outputs the re-ordered columns into the existing table.
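A minimal sketch of that second step (all names here are hypothetical): load the new files into a staging table whose schema matches their column order, then let column names do the matching on the way into the existing table:

-- mydataset.staging matches the CSV's new column order;
-- mydataset.target is the existing table with the original order.
INSERT INTO mydataset.target (col_a, col_b, col_c)
SELECT col_a, col_b, col_c
FROM mydataset.staging;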

If you have control over how the CSV is created, you can also use the BigQuery client to fetch the table's current column order and then generate the file according to that.
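If you prefer the SQL route over the client API, BigQuery's INFORMATION_SCHEMA exposes the same information; a sketch, with dataset and table names as placeholders:

-- Returns the table's columns in schema order; write the CSV header in this order.
SELECT column_name
FROM mydataset.INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'target'
ORDER BY ordinal_position;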

Related

Can DataGrip iterate through CSV files?

I'm using DataGrip and need to do multiple queries of the following format:
SELECT * FROM table WHERE id = '01345' AND date = '01-01-2020'
For each query, the id and date are different. I have a CSV file with many rows, each row containing a different id and date. Is there a way to get DataGrip to iterate through the CSV file, execute all the required queries, and save each output as a CSV file (all outputs combined into a single CSV file would also suffice)?
There is no one-step solution.
But here is what I would do:
Import the CSV file into a table in a temporary in-memory database, e.g. H2 (once the rows are in a table, the per-row queries collapse into a single join; see the sketch after the links below)
Write your custom extractor, see the examples by #moscas
Additionally, see DataGrip blog posts about export and extractors:
Export data in any way with IntelliJ-based IDEs
Data extractors
What objects and functions are available for custom data extractors
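To make step 1 concrete: assuming the CSV lands in (or is linked into) the same database as your data table, the per-row queries become a single join. A sketch, where csv_params and your_table are placeholder names:

-- One query instead of one per CSV row.
SELECT t.*
FROM your_table t
JOIN csv_params c
  ON t.id = c.id
 AND t.date = c.date;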

BigQuery append data from csv to column

I have a BigQuery table where I added a new column and am not sure how to populate it with data.
[screenshot of the BigQuery table]
[screenshot of the CSV/Excel file]
I did try to upload the CSV directly as a new table but had errors, and am now trying to update the column named 'Max_Takeoff_kg', the last column in the CSV. How do I write a query within BigQuery to update the rows with the data from the CSV's last column?
If you're loading this data only once, I'd recommend saving your XLS as CSV and trying to create the new table again.
Alternatively, you can update your table using BigQuery DML, as described in the documentation.
It's important to remember that for this approach to work correctly, you must have a way to identify each row uniquely.
Example:
UPDATE your_db.your_table
SET your_field = <value>
WHERE <condition_to_identify_row_uniquely>
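A more concrete sketch for this question (every name is an assumption: csv_stage is the CSV loaded as a staging table, and Aircraft stands in for whatever column uniquely identifies a row):

-- Load the CSV into your_db.csv_stage first, then copy the new column across.
UPDATE your_db.your_table t
SET t.Max_Takeoff_kg = s.Max_Takeoff_kg
FROM your_db.csv_stage s
WHERE t.Aircraft = s.Aircraft;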
I hope it helps

How to COPY CSV exports into a Redshift table that had a new column added?

We have many CSV files in S3, but one of the tables had a new column added, so when importing those CSV files we get the error "Delimiter not found". The new column is nullable and was added to the end of the table, so I'm hoping there's a way to import the old MySQL exports with NULL for the new column.
Is there a way to do this without editing all the export files to add that column?
You can map columns in a COPY command:
http://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-column-mapping.html
You can specify the columns that exist in your file in the COPY command (in the same order as they appear in your CSV file). With the EMPTYASNULL parameter, columns with no data will get a NULL value.
COPY table_name (column_a, column_b)
FROM 's3://xxx'
CSV
[...]
EMPTYASNULL
;
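Applied to this case (bucket, role, and column names are placeholders): list only the columns the old exports contain, and the new trailing column is simply not loaded, so it defaults to NULL:

-- new_column is omitted from the column list, so Redshift leaves it NULL.
COPY my_table (col_a, col_b, col_c)
FROM 's3://my-bucket/old-exports/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
CSV
EMPTYASNULL;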

MS Access 2007: How can a csv file and its csv data be imported to a mdb table which has only column headings?

The column headings are already in the database table and need to be matched up with the file's CSV columns.
I get 2 error message popups at the end of the import wizard:
1) a prompt to squash the existing table 'table_name', with a yes/no selection. I select 'yes', then
2) "impossible to replace the table 'table_name'"
The CSV file matches the table in column count; it contains only the data to fill the rows, not the headings.
Problem solved using Excel.
Sorry, I'll try to think about what I'm really looking to do before posting next time. I really just wanted to match up the data from the CSV file with the headings from the Access table. I was able to import both into Excel to match them up; thanks to those who tried to help.

Phpmyadmin - import new column to existing records

I already have a table in phpmyadmin that contains users records. Each user has a unique admission number. I now want to add a new column to this table and was wondering how I can import data for this new column using just the admission number and new data.
Is this possible? I have a CSV but can't work out the best way to import the data without overwriting any existing records.
Thanks.
As far as I can see, this is not possible. phpMyAdmin's import features are for whole rows only.
You could write a small PHP script that opens the CSV file using fgetcsv(), walks through every line, and creates an UPDATE statement for each record:
UPDATE tablename SET new_column = "new_value" WHERE admission_number = "number"
You can then either print the statements and copy-paste them into phpMyAdmin, or execute them directly from the script.
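If you'd rather stay in SQL than write PHP (for example, via phpMyAdmin's SQL tab), a staging table achieves the same thing; table, column, and file names below are assumptions, and LOAD DATA LOCAL INFILE must be enabled on your server:

-- Stage the CSV, then update by admission number.
CREATE TABLE csv_stage (admission_number VARCHAR(32), new_value VARCHAR(255));

LOAD DATA LOCAL INFILE '/path/to/new_data.csv'
INTO TABLE csv_stage
FIELDS TERMINATED BY ','
IGNORE 1 LINES;

UPDATE tablename t
JOIN csv_stage s ON t.admission_number = s.admission_number
SET t.new_column = s.new_value;

DROP TABLE csv_stage;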
If you want to do it using just CSV, here are the steps you could perform.
In a text editor, make a comma-separated list of all the column names for the final table (including your new column). This will be useful when importing the new data.
Add the new column to your table using phpmyadmin
Export current table in csv format and sort by admission number in Excel
In your new data CSV, sort by admission number
Copy the column over from your new data to your exported CSV and save for re-import.
Backup your users table (export to CSV)
Truncate the contents of your table (Operations, Truncate)
Import your updated CSV
Optional / Recommended: When you import CSV into phpmyadmin, use the column names option to specify the columns you are using, separated by commas (no spaces).
Assumptions:
1. You are using a spreadsheet program such as Excel or OpenOffice to open your CSV files.
Any problems?
Truncate the table again and import the backup file.