importing a csv file into mysql as BLOB - mysql

I'm working on test scripts and I want to load the results.csv file into a database as a BLOB.
Basically I want my table to look like:
serial_number | results
So one row for each device. My code would look like this for a device with serial number A123456789 (changed names and path for simplicity). The table is called test.
create table test (serial_number varchar(20), results longblob);
insert into test values ('A123456789',load_file('C:/results.csv'));
When I do this, however, the second column, which should contain a BLOB, comes out as NULL, with no errors raised.
If I open my results.csv file in Notepad and save it as a .txt file with no changes whatsoever, I get exactly what I want when I run the same code with ".txt" substituted for ".csv" in the path. So it would also solve my problem if I could load the CSV file as a text file.
Thanks for anything you may be able to contribute.
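(For reference, LOAD_FILE() is documented to return NULL silently in a few situations; a quick diagnostic sketch, assuming the statements below are run as the same MySQL account that performs the INSERT:)
SHOW VARIABLES LIKE 'secure_file_priv';   -- if set, the file must live under this directory
SHOW VARIABLES LIKE 'max_allowed_packet'; -- the file must be smaller than this
SHOW GRANTS;                              -- the account needs the FILE privilege
SELECT LOAD_FILE('C:/results.csv');       -- NULL here means the server itself cannot read the file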

Related

Values not inserted into table - Bulk insert from csv to ms-access

I am bulk inserting values from a csv file into my Access table. Things were working fine, till today I encountered this problem where Access inserts all the values except for one field called BN1. It simply leaves this column blank when the data is non-numeric. This is the batch name of the products, and in the design the field type is Memo (legacy .mdb file, so I can't change it).
My sample data:
DATE,TIME,PN1,BN1,CH0,CH1,CH2
2019-02-18,16:40:05,test,prompt,0,294,0
2019-02-18,16:40:14,test,1,700,294,0
So in the above data, the first row is inserted with a blank value where BN1 should be 'prompt', whereas the 2nd row is inserted properly with BN1 as 1.
My code to insert the data:
INSERT INTO Log_143_temp ([DATE],[TIME],PN1,BN1,CH0,CH1,CH2)
SELECT [DATE],[TIME],PN1,BN1,CH0,CH1,CH2
FROM [Text;FMT=Delimited;DATABASE=C:\tmp].[SAMPLE_1.csv]
The path and the file names are correct; otherwise it wouldn't have inserted any values at all.
Hi, here is how I solved the issue.
I changed the bulk insert query to:
INSERT INTO Log_143_temp ([DATE],[TIME],PN1,BN1,CH0,CH1,CH2)
SELECT [DATE],[TIME],PN1,BN1,CH0,CH1,CH2
FROM [Text;FMT=CSVDelimited;HDR=Yes;DATABASE=C:\tmp].[SAMPLE_1.csv]
Then add a file named schema.ini to the folder containing the csv file to be imported.
Contents of the schema.ini file:
[SAMPLE_1.csv]
ColNameHeader=True
Format=CSVDelimited
DateTimeFormat=yyyy-mm-dd
Col1="DATE" Text
Col2="TIME" Text
Col3="PN1" Text
Col4="BN1" Text
Col5="CH0" Double
Col6="CH1" Double
Col7="CH2" Double
Now the csv files get imported without any issue, presumably because schema.ini forces BN1 to be read as Text instead of letting the text driver guess the column type from the first few rows.
For additional info on schema.ini, see the following link:
https://learn.microsoft.com/en-us/sql/odbc/microsoft/schema-ini-file-text-file-driver?view=sql-server-2017

mysql using the command line client - Image Loading

I am trying to load an image into MySQL using the command line client, and the following is the code that I have been using:
INSERT INTO AutomobileParts (Part_ID, Part_Name,Img_Path)
VALUES (
"101AA",
"BikePanel",
load_file("F:/PYQT Projects/bikepanel.jpg")
) WHERE i=1;
Can someone please help me understand where I am going wrong with this piece of code?
MySQL LOAD_FILE() reads the file and returns the file contents as a string.
That will probably not work with an image I guess.
Your field name "Img_Path" indicates that you only save the path in that field, not the image, and that the image itself is on your file system at F:/PYQT Projects/bikepanel.jpg. This would be the normal way of saving images: the image file itself is in some folder on your server and you save the path to that file in your table.
So if you want to show the image you would select the saved path from your MySQL table and read the file with your respective programming language (e.g. PHP or whatever you are using).
If you just want to save the path, this might do the job; note that an INSERT ... VALUES statement does not take a WHERE clause, so the WHERE i=1 part should be dropped:
INSERT INTO AutomobileParts (Part_ID, Part_Name, Img_Path)
VALUES (
"101AA",
"BikePanel",
"F:/PYQT Projects/bikepanel.jpg"
);
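If the goal really is to store the image bytes in the table rather than just the path, a minimal sketch would be to add a BLOB column and use LOAD_FILE() for the insert; the column name Part_Image below is hypothetical, and LOAD_FILE() is still subject to the FILE privilege and the secure_file_priv setting:
ALTER TABLE AutomobileParts ADD COLUMN Part_Image LONGBLOB;
INSERT INTO AutomobileParts (Part_ID, Part_Name, Img_Path, Part_Image)
VALUES ('101AA', 'BikePanel', 'F:/PYQT Projects/bikepanel.jpg',
        LOAD_FILE('F:/PYQT Projects/bikepanel.jpg'));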

Cassandra COPY command never stops while loading a .csv file

Hello, and thank you for taking the time to read my issue.
I have the following issue with Cassandra cqlsh:
When I use the COPY command to load a .csv into my table, the command prompt never finishes executing, and nothing is loaded into the table if I stop it with Ctrl+C.
I'm using .csv files from: https://www.kaggle.com/daveianhickey/2000-16-traffic-flow-england-scotland-wales
specifically from ukTrafficAADF.csv.
I put the code below:
CREATE TABLE first_query ( AADFYear int, RoadCategory text,
LightGoodsVehicles text, PRIMARY KEY(AADFYear, RoadCategory));
I'm trying this:
COPY first_query (AADFYear, RoadCategory, LightGoodsVehicles) FROM '..\ukTrafficAADF.csv' WITH DELIMITER=',' AND HEADER=TRUE;
This gives me the error below repeatedly:
Failed to import 5000 rows: ParseError - Invalid row length 29 should be 3, given up without retries
And it never finishes.
I should add that the .csv file has more columns than I need, and trying the previous COPY command with the SKIPCOLS keyword listing the unused columns does the same thing.
Thanks in advance.
With the cqlsh COPY command, all columns in the CSV must be present in the table schema.
In your case, your CSV ukTrafficAADF has 29 columns but the table first_query has only 3 columns; that's why it's throwing the parse error.
So in some way you have to remove all the unused columns from the CSV; then you can load it into the Cassandra table with the cqlsh COPY command.
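Assuming the file has been reduced to just those three columns (the name ukTrafficAADF_3cols.csv below is only a placeholder for the trimmed copy), the original COPY command should then complete:
COPY first_query (AADFYear, RoadCategory, LightGoodsVehicles) FROM '..\ukTrafficAADF_3cols.csv' WITH DELIMITER=',' AND HEADER=TRUE;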

Redshift COPY - No Errors, 0 Record(s) Loaded Successfully

I'm attempting to COPY a CSV file to Redshift from an S3 bucket. When I execute the command, I don't get any error messages; however, the load doesn't work.
Command:
COPY temp FROM 's3://<bucket-redacted>/<object-redacted>.csv'
CREDENTIALS 'aws_access_key_id=<redacted>;aws_secret_access_key=<redacted>'
DELIMITER ',' IGNOREHEADER 1;
Response:
Load into table 'temp' completed, 0 record(s) loaded successfully.
I attempted to isolate the issue via the system tables, but there is no indication there are issues.
Table Definition:
CREATE TABLE temp ("id" BIGINT);
CSV Data:
id
123,
The line endings in your csv file probably don't have a unix new line character at the end, so the COPY command probably sees your file as:
id123,
Given you have the IGNOREHEADER option enabled, and the line endings in the file aren't what COPY is expecting (my assumption based on past experience), the file contents get treated as one line, and then skipped.
I had this occur for some files created from a Windows environment.
I guess one thing to remember is that CSV is not a standard, more a convention, and different products/vendors have different implementations for csv file creation.
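One way to check that theory from the Redshift side, assuming access to the load system tables, is to look at what the COPY actually recorded:
-- rows scanned per input file by recent COPY commands
SELECT query, TRIM(filename) AS filename, lines_scanned, curtime
FROM stl_load_commits
ORDER BY curtime DESC
LIMIT 10;
-- parse errors, if any, with the offending line number and reason
SELECT query, TRIM(filename) AS filename, line_number, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;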
I repeated your instructions, and it worked just fine:
First, the CREATE TABLE
Then, the LOAD (from my own text file containing just the two lines you show)
This resulted in:
Code: 0 SQL State: 00000 --- Load into table 'temp' completed, 1 record(s) loaded successfully.
So, there's nothing obviously wrong with your commands.
At first, I thought that the comma at the end of your data line could cause Amazon Redshift to think that there is an additional column of data that it can't map to your table, but it worked fine for me. Nonetheless, you might try removing the comma, or creating an additional column to store this 'empty' value.
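If removing the trailing comma isn't convenient, one possible variant of that second suggestion is to give the table a throwaway column so the empty trailing field has somewhere to land (the column name extra_col is just a placeholder):
CREATE TABLE temp ("id" BIGINT, "extra_col" VARCHAR(1));
COPY temp FROM 's3://<bucket-redacted>/<object-redacted>.csv'
CREDENTIALS 'aws_access_key_id=<redacted>;aws_secret_access_key=<redacted>'
DELIMITER ',' IGNOREHEADER 1;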

Junk characters at the beginning of file obtained via column transformations in SSIS

I need to export varbinary data to files. But when I do it using the Export Column transformation in SSIS, the exported files are corrupt. There are a few junk characters at the start of each file; on removing them, the file opens fine.
A similar post for BCP says that these characters specify the data length.
I would like to know how to address this issue in SSIS.
Thanks
The Export Column transformation is used for converting varbinary data to files. I have tried something similar using AdventureWorks, which has image-type varbinary data.
The following query is used as the source query. I have modified the query, since the table does not contain the full path to write the image files to.
SELECT [ProductPhotoID]
,[ThumbNailPhoto]
,'D:\SSISTesting\ThumnailPhotos\'+[ThumbnailPhotoFileName]
,[LargePhoto]
,'D:\SSISTesting\LargePhotos\'+[LargePhotoFileName]
,[ModifiedDate]
FROM [Production].[ProductPhoto]
Used the Export Column transformation [also available in 2005 and 2008], configured to write each photo column out to the file path built in the corresponding column of the query above.
Mapped the rest of the columns to the destination.
After running the package, all the image files are written into the respective folders [D:\SSISTesting\ThumnailPhotos\ and D:\SSISTesting\LargePhotos].
Hope this helps!