I've been trying to figure this out for two days now and would really appreciate some help. I've imported data from a CSV file where one field contains HTML encoded in base64.
The idea is to loop over every row and run FROM_BASE64 on it.
How do I structure a query that:
Loops over all lines
Calls FROM_BASE64 for each line
Runs UPDATE (or similar functionality) on that same row and column
Context: I'm running MariaDB (MySQL equivalent).
Thanks for any help!
Base64 is typically used to encode binary data, so you probably shouldn't store the decoded value in the same column as the base64-coded string. Instead, use ALTER TABLE to add a new VARBINARY or BLOB column to hold the binary data:
ALTER TABLE MyTable ADD COLUMN BinaryField BLOB;
You can then fill that column with an UPDATE statement:
UPDATE MyTable SET BinaryField = FROM_BASE64(EncodedField);
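Since the question says the payload is HTML, it may actually be text rather than true binary data. Here is a rough sketch of that variant (table and column names are taken from the statements above; the MEDIUMTEXT column and utf8mb4 charset are my assumptions):

-- Preview the decode on a few rows before changing anything:
SELECT EncodedField, FROM_BASE64(EncodedField) FROM MyTable LIMIT 5;

-- Store the decoded HTML as text instead of a BLOB:
ALTER TABLE MyTable ADD COLUMN HtmlField MEDIUMTEXT;
UPDATE MyTable SET HtmlField = CONVERT(FROM_BASE64(EncodedField) USING utf8mb4);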
Related
I have a BigQuery table where I added a new column, and I'm not sure how to populate its rows with data.
This is the BigQuery table:
This is the csv/excel file: I did try to upload the CSV directly as a new table but got errors, and I am now trying to update the column named 'Max_Takeoff_kg', which is the last column in the CSV. How do I write a query within BigQuery to update the rows with the data from the last column of the CSV?
If you're loading this data only once, I'd recommend saving your XLS as CSV and trying to create a new table again.
Anyway, you can update your table using BigQuery DML, as you can see here.
It's important to remember that, in your case, for this approach to work correctly you must have a way to identify your rows uniquely.
Example:
UPDATE your_db.your_table
SET your_field = <value>
WHERE <condition_to_identify_row_uniquely>
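For instance, if you first load the CSV into a staging table, you can update the target table with a join. This is only a sketch: the project, dataset, staging-table, and key-column (Aircraft_Model) names are assumptions, while Max_Takeoff_kg comes from your question.
UPDATE `your_project.your_dataset.your_table` t
SET Max_Takeoff_kg = s.Max_Takeoff_kg
FROM `your_project.your_dataset.staging_from_csv` s
WHERE t.Aircraft_Model = s.Aircraft_Model;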
I hope it helps
I am a bit new to SSIS. I am using SSIS 2012. The input files are Excel, CSV, and TXT.
The data has to be loaded from the input files into the database. The column sizes in the input files keep changing, so I can't stick to a fixed length. Changing the data type of the connection managers in the package to ntext would solve this, but we have a performance constraint too, so the customer prefers that we truncate the extra data and notify him rather than hurt performance.
Row redirection will give me the rows that are truncated, but I want to tell the customer, for each file, which columns were truncated.
Does SSISDB track the data that gets truncated? If so, in which table?
I am planning to write the truncated data to separate files and then use a script component to compare the length of each column. Is there a better way?
Write the truncated (redirected) rows to a table. Between the redirecting input and the output that writes to the table, you can add a Derived Column transformation that adds the name of the source file (if you have the package hold it in a variable).
If you truncate that table before each run, a simple Execute SQL task that gets the row count into a variable, followed by an email to the customer if the count is greater than zero, will work.
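The Execute SQL task query can be as simple as the following (dbo.TruncatedRows is a hypothetical table name; map the single result to an SSIS variable and use it to decide whether to send the email):
SELECT COUNT(*) AS TruncatedRowCount
FROM dbo.TruncatedRows;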
Ok, as far as I know, there is nothing in SSIS which automatically tracks which columns are truncated.
There are a couple of ways I can think of to handle this, and both put the main logic in a script transformation.
If I had to do this, I would create a script component that "predicts" which columns will be truncated before passing them on to the rest of the dataflow.
As you use LEFT() to truncate each string to the required length, you can check afterwards to see if the old string and the new string are the same.
If not, then you know truncation has occurred and you can populate a variable to use in a send email task.
Or you can truncate the columns with derived columns first and then use a script transformation to compare the old column with the new column. Same logic.
I have a data table T in MATLAB with more than 40,000 rows. I want to insert this table into a MySQL database. Table T has columns with different data types (char, date, integer). I tried the following:
fastinsert(conn,'tablename',colnames2,T)
I even tried insert and datainsert. I converted the table to a cell array, but it still didn't work. Then I tried to convert that cell array into a matrix, but I couldn't: it says all the content must be of the same data type to convert it into a matrix.
Is there any way I can insert my MATLAB data table into the MySQL database?
Instead of converting your data from a cell array to a matrix, have you tried converting it to a table using cell2table() and then using insert()? There is an example in the MATLAB documentation that can be found here.
The linked example uses multiple data types in a cell and then converts them to a table (instead of a matrix) which can then be written to the database with mixed data types.
Suppose we have a table with a DECIMAL column containing values such as 128.98, 283.98, 21.20.
I want to import some CSV files into this table. However, in the columns of these files the values look like 235,69 and 23,23, with a comma instead of a point.
I know I can REPLACE that column, but is there some way of doing that before LOAD DATA INFILE?
I do not believe you can replace that column and load the data at the same time. It looks like you will have to take multiple steps to get the result you want.
First, load the data into a raw table using the LOAD DATA INFILE command. This table can be identical to the main table; you can use the CREATE TABLE ... LIKE command to create it.
Then process the data in the raw table (i.e. change the comma to a point where applicable).
Finally, select the data from the raw table and insert it into the main table, either with row-by-row processing or a bulk insert, as sketched below.
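A rough sketch of those steps, using hypothetical names (main_table with columns id and amount, a semicolon-delimited CSV); adjust names, delimiters, and column lists to your actual files:
-- 1. Raw table identical to the main table, except the decimal column is kept as text:
CREATE TABLE raw_table LIKE main_table;
ALTER TABLE raw_table MODIFY amount VARCHAR(32);

-- 2. Load the file as-is:
LOAD DATA INFILE '/path/to/file.csv'
INTO TABLE raw_table
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n'
(id, amount);

-- 3. Fix the separator in the raw table:
UPDATE raw_table SET amount = REPLACE(amount, ',', '.');

-- 4. Copy into the main table (the DECIMAL column accepts the dotted text):
INSERT INTO main_table (id, amount)
SELECT id, amount FROM raw_table;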
This can all be done in a stored procedure (SP) or by a third-party script written in Python, PHP, etc.
If you want to know more about SPs in MySQL, here is a useful link.
I have a text file to be imported in a MySQL table. The columns of the files are comma delimited. I set up an appropriate table and I used the command:
LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable FIELDS TERMINATED BY ',';
The problem is that there are several spaces in the text file, before and after the data in each column, and it seems the spaces are all imported into the table (which is not what I want). Is there a way to load the file without the extra spaces (other than processing each row of the text file before importing it into MySQL)?
As far as I understand, there's no way to do this during the actual load of the data file dynamically (I've looked, as well).
It seems the best way to handle this is either to use the SET clause with the TRIM() function during the load (SET column2 = TRIM(column2)), or to run an UPDATE on the string columns after loading, using the TRIM() function.
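For example, with a two-column table the SET-clause variant could look like this (a sketch only; @c1 and @c2 are user variables holding the raw field values, and the column names are assumptions):
LOAD DATA LOCAL INFILE 'myfile.txt'
INTO TABLE mytable
FIELDS TERMINATED BY ','
(@c1, @c2)
SET column1 = TRIM(@c1),
    column2 = TRIM(@c2);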
You can also create a stored procedure using prepared statements to run the TRIM function on all columns in a specified table, immediately after loading it.
You would essentially pass in the table name as a parameter, and the SP would use the information_schema database to determine which columns to update.
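A rough sketch of that idea (the procedure name is hypothetical, and it assumes you only want to trim the character/text columns of the given table):
DELIMITER //
CREATE PROCEDURE trim_all_columns(IN p_schema VARCHAR(64), IN p_table VARCHAR(64))
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE col VARCHAR(64);
    DECLARE cur CURSOR FOR
        SELECT COLUMN_NAME FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA = p_schema
          AND TABLE_NAME = p_table
          AND DATA_TYPE IN ('char', 'varchar', 'text');
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO col;
        IF done THEN LEAVE read_loop; END IF;
        -- Build and run "UPDATE <table> SET <col> = TRIM(<col>)" dynamically.
        SET @sql = CONCAT('UPDATE `', p_schema, '`.`', p_table,
                          '` SET `', col, '` = TRIM(`', col, '`)');
        PREPARE stmt FROM @sql;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;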
If you can use .NET, CSVReader is a great option (http://www.codeproject.com/KB/database/CsvReader.aspx). You can read data from a CSV file and specify the delimiter, trimming options, etc. In your case, you could choose to trim left and right spaces from each value. You can then either save the result to a new text file and import it into the database, or loop through the CsvReader object and insert each row into the database directly. The performance of CsvReader is impressive. Hope this helps.