I have a BigQuery table to which I added a new column, and I'm not sure how to populate its rows with data.
This is the BigQuery table:
This is the csv/excel file: I did try to upload the csv directly as a new table but had errors, and I am now trying to update the column named 'Max_Takeoff_kg' (it's the last column in the csv). How do I write a query within BigQuery to update the rows with the data from the last column of the csv?
If you're loading your data only this once, I'd recommend that you save your XLS as CSV and try to create a new table again.
Anyway, you can update your table using BigQuery DML, as you can see here.
It's important to remember that, in your case, for this approach to work correctly you must have a way to identify your rows uniquely.
Example:
UPDATE your_db.your_table
SET your_field = <value>
WHERE <condition_to_identify_row_uniquely>
I hope this helps.
Related
I was given an Excel (CSV) sheet containing database metadata.
I'm asking if there's a simple way to import the csv and create the tables from there?
The data is not part of this question. The CSV looks like this:
logical_table_name, physical_table_name, logical_column_name, physcial_column_name, data_type, data_length
There are about 2000 rows of metadata. I'm hoping I don't have to create the tables manually. Thanks.
I don't know of any direct import or creation tool. However, if I had to do this and couldn't find one, I would import the Excel file into a staging table (just a direct data import). I'd add a unique auto-ID column to the staging table to keep the rows in order.
Then I would use some queries to build table- and column-creation commands from the raw data. Unless this was something I was setting up to do often, I would keep it dead simple and not try to get fancy: build an individual ADD COLUMN command for each column, build a CREATE TABLE command for the first row of each table, and sort them all by the order ID, tables before columns. Then you should be able to just copy the script column, check the commands, and go.
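A minimal Python sketch of that idea, assuming metadata columns like the ones described in the question (the sample table and column names below are made up; it also assumes the rows arrive already sorted by the order ID):

```python
import csv
import io

# Hypothetical metadata in the shape described in the question
# (names here are invented for illustration).
SAMPLE = """physical_table_name,physical_column_name,data_type,data_length
customer,customer_id,INT,10
customer,customer_name,VARCHAR,100
order_hdr,order_id,INT,10
"""

def build_ddl(metadata_csv):
    """Emit a CREATE TABLE for the first row of each table, then
    ALTER TABLE ... ADD COLUMN commands for the remaining columns."""
    commands = []
    seen = set()
    for row in csv.DictReader(io.StringIO(metadata_csv)):
        col = f"{row['physical_column_name']} {row['data_type']}({row['data_length']})"
        table = row["physical_table_name"]
        if table not in seen:
            seen.add(table)
            commands.append(f"CREATE TABLE {table} ({col});")
        else:
            commands.append(f"ALTER TABLE {table} ADD COLUMN {col};")
    return commands
```

You can then paste the generated commands into your SQL client, check them, and run them.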
I have regular imports going to BigQuery via CSV which work fine.
The CSV file format is:
[1st line] - header = column names which exactly match the column names in the BigQuery table I am importing to
[rest of the lines] = the data
However, the order of the columns in my CSV has recently changed, and when importing to BigQuery the column names in the CSV are not matched to the column names in the BigQuery table. The data basically gets imported in the order of the CSV columns, which is wrong.
Is there a way to tell BigQuery which column from my CSV goes to which column in BigQuery table?
I am using the official PHP library.
Example: https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/bigquery/api/src/functions/import_from_file.php
CSV import won't match columns by name (even assuming you are using the first row to name the columns).
The best you could do is import into a different table which matches the column order of the new files, and then run a SELECT that will output the re-ordered columns into the existing table.
If you have control over how the CSV is created, you can also use the BigQuery client to get the current order of columns in the table and then generate the file according to that.
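If a script is an option, the reordering itself is straightforward. Here's a minimal Python sketch that rewrites a header-bearing CSV so its columns follow a target order; in practice that target order would come from the table's schema (e.g. fetched via the BigQuery client library):

```python
import csv
import io

def reorder_csv(csv_text, target_columns):
    """Rewrite a CSV (first row = header) so its columns follow
    target_columns, e.g. the column order of the destination table."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=target_columns, lineterminator="\n")
    writer.writeheader()
    for row in reader:
        # Emit only the target columns, in the target order.
        writer.writerow({c: row[c] for c in target_columns})
    return out.getvalue()
```

Run the reordered output through your existing import and the columns will line up with the table again.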
I have to import CSV files for different clients into my system, some comma-separated [,], some pipe-separated [|], etc. They are always very big files.
While importing I need to filter out duplicate records; duplicates should not be inserted into the DB.
The problem is that the columns can differ between clients. I have a database for every client for the CSV data, which I import every day, week, or month depending on the client's needs. I keep the data for every import so we can generate reports on what data we received in the CSV file; our system does processing after the import.
Data structure example:
Client 1 database:
First_Name | Last_Name | Email | Phone | Etc…
95% of the data is the same in every new CSV file. Some new records come in and some records are deleted from the CSV, so our system only processes the newly imported records.
Currently, we import the data into a new table every time. We name the table with a timestamp so we can keep track of imports. It is an expensive process, and it duplicates records and tables.
I'm thinking about a solution and need your suggestions on it.
Keep one table: every time I import CSV file data into the table, I'll alter the existing table and add a new column whose name is the current date (byte or Boolean), and set it to true/false on import?
My other question is about the first time I import a CSV file. I need to write a script:
While importing CSV data, if the table already exists then my date logic will apply; if the table does not exist, the script should create a table using the given "client name" as the table name. The challenge is the columns: I don't know them in advance, so it should create the columns from the CSV file.
If the table already exists and new records come in, it should insert them; otherwise it should update the existing ones.
Is this doable in MySQL?
Although I'll have to do something similar for MSSQL as well, right now I need a solution for MySQL.
Please help me... I'm not good at MySQL :(
You can certainly do an INSERT ... ON DUPLICATE KEY UPDATE statement when importing each record.
see here :
https://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
I propose you create a script to dynamically create your table if it doesn't exist.
What language would you use to insert your CSV?
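For example, if Python is acceptable, here is a minimal sketch that handles a configurable delimiter and emits one INSERT ... ON DUPLICATE KEY UPDATE statement per record. The table name is a placeholder, it assumes a UNIQUE key exists so duplicates update rather than insert, and real code should use parameterized queries instead of string interpolation:

```python
import csv
import io

def build_upserts(csv_text, table, delimiter=","):
    """Read a delimited file (first row = column names) and build one
    MySQL INSERT ... ON DUPLICATE KEY UPDATE statement per record."""
    reader = csv.reader(io.StringIO(csv_text), delimiter=delimiter)
    columns = next(reader)
    stmts = []
    for row in reader:
        values = ", ".join(f"'{v}'" for v in row)  # use placeholders in real code
        updates = ", ".join(f"{c}=VALUES({c})" for c in columns)
        stmts.append(
            f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({values}) "
            f"ON DUPLICATE KEY UPDATE {updates};"
        )
    return stmts
```

Because the column names are read from the CSV header, the same script works for clients whose files have different columns or separators.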
I created a table in BigQuery in a cloud application. By mistake, I uploaded two CSV files into the BigQuery table.
How can I delete the data from one or both CSV files from the BigQuery table?
Thanks
Arvind
Unfortunately, there is currently no way to remove data from a BigQuery table. Your best option is to re-import the data in a new table. (If you no longer have the original CSV, you can export the table and then remove the duplicates before re-importing.)
I already have a table in phpMyAdmin that contains user records. Each user has a unique admission number. I now want to add a new column to this table, and was wondering how I can import data for this new column using just the admission number and the new data.
Is this possible? I have a CSV but can't work out the best way to import the data without overwriting any existing records.
Thanks.
As far as I can see, this is not possible. phpMyAdmin's import features are for whole rows only.
You could write a small PHP script that opens the CSV file using fgetcsv(), walks through every line, and creates an UPDATE statement for each record:
UPDATE tablename SET new_column = "new_value" WHERE admission_number = "number"
You can then either output and copy+paste the commands, or execute them directly from the script.
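The same idea, sketched in Python for illustration (the table and column names are the placeholders from the statement above, and the CSV is assumed to hold admission-number/value pairs):

```python
import csv
import io

def build_updates(csv_text, table="tablename"):
    """For each CSV line (admission_number, new_value), build the
    UPDATE statement shown above. Names are placeholders."""
    stmts = []
    for number, value in csv.reader(io.StringIO(csv_text)):
        stmts.append(
            f'UPDATE {table} SET new_column = "{value}" '
            f'WHERE admission_number = "{number}";'
        )
    return stmts
```

Because each statement is keyed on the unique admission number, no existing rows are overwritten except in the new column.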
If you want to do it using just CSV, here are the steps you could perform.
In a text editor, make a comma-separated list of all the column names for the final table (including your new column). This will be useful when importing the new data.
Add the new column to your table using phpMyAdmin.
Export the current table in CSV format and sort it by admission number in Excel.
In your new data CSV, sort by admission number
Copy the column over from your new data to your exported CSV and save it for re-import.
Backup your users table (export to CSV)
Truncate the contents of your table (Operations, Truncate)
Import your updated CSV
Optional / Recommended: When you import CSV into phpmyadmin, use the column names option to specify the columns you are using, separated by commas (no spaces).
Assumptions:
1. You are using a spreadsheet program such as Excel / OpenOffice to open your CSV files.
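If a small script is an option, the sort-and-copy steps can be automated instead of done in a spreadsheet. A minimal Python sketch, assuming the new-data CSV holds (admission_number, value) pairs and the exported CSV's key column is named admission_number (both names are assumptions):

```python
import csv
import io

def merge_new_column(exported_csv, new_data_csv, key="admission_number",
                     new_col="new_column"):
    """Join the new-data CSV (key,value pairs) onto the exported table
    CSV by admission number, adding the new column for re-import."""
    new_values = dict(csv.reader(io.StringIO(new_data_csv)))
    reader = csv.DictReader(io.StringIO(exported_csv))
    fieldnames = reader.fieldnames + [new_col]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames, lineterminator="\n")
    writer.writeheader()
    for row in reader:
        # Rows with no match in the new data get an empty value.
        row[new_col] = new_values.get(row[key], "")
        writer.writerow(row)
    return out.getvalue()
```

Joining on the key this way also removes the need to sort either file first.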
Any problems?
Truncate the table again and import the CSV backup file.