I need to import a CSV file that contains both numbers and strings into a SQLite database. Here is a sample:
col_1|col_2|col_3
10|text2|http://www.google.com
For the import I use the Spatialite GUI, because I also have to manage spatial data. The import itself works fine, but when I try to select the data
select * from test;
the value in col_2 does not come back as the string I imported. How should I structure my CSV file so that my "text2" string is stored correctly?
I've solved it in a different manner.
Open SQLite and run these commands:
CREATE TABLE test(
col_1 TEXT,
col_2 TEXT,
col_3 TEXT
);
.mode csv
.separator |
.import test.csv test
.quit
You can save this in a file (e.g. test.txt) and then, in a second file named test.sh, write:
sqlite3 dbtest.sqlite < test.txt
Save it, make it executable (chmod 777), and then launch it from the command line:
./test.sh
This will create a table test in your dbtest.sqlite, loading the data from the test.csv file.
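To verify the import, a quick check from the shell (a hedged sketch reusing the names above) could be:
sqlite3 dbtest.sqlite "SELECT * FROM test LIMIT 5;"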
It looks like you defined the type of col_2 as REAL, where it should be TEXT.
The structure of your CSV looks OK.
Disclaimer: I have never used Spatialite, this is just from looking at the information you provided.
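For what it's worth, a minimal sketch of the fix in plain SQLite: declaring col_2 with TEXT affinity keeps the string intact (the table name test_fixed is hypothetical):
CREATE TABLE test_fixed(col_1 TEXT, col_2 TEXT, col_3 TEXT);
INSERT INTO test_fixed VALUES ('10', 'text2', 'http://www.google.com');
SELECT col_2, typeof(col_2) FROM test_fixed;  -- returns: text2 | text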
I'm working with Hive and I need to add data in JSON format. I use the https://github.com/rcongiu/Hive-JSON-Serde library. It loads data into Hive from a file.
~$ cat test.json
{"text":"foo","number":123}
{"text":"bar","number":345}
$ hadoop fs -put -f test.json /user/data/test.json
$ hive
hive> CREATE DATABASE test;
hive> CREATE EXTERNAL TABLE test ( text string, number int )
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION '/user/data';
hive> SELECT * FROM test;
OK
foo 123
bar 345
But I need to load data from a query, like:
insert into table test values {"text": "abc", number: 666}
Does anyone know how to do this?
The question is old but, in case someone is looking for an answer:
I tried another approach, as follows:
CREATE TABLE test (str string);
LOAD DATA INPATH 'path/test.json' INTO TABLE test;
INSERT INTO TABLE test VALUES ('{"text":"abc","number":666}');
The only difference is that when you need to read the values back, it will be something like:
select get_json_object(str,'$.text') as text1, get_json_object(str,'$.number') as number1 from test;
A SerDe is really intended for use with external tables, which read their data from files. So it will not help you directly insert JSON data, and the insert query you give as an example will not work as such. I suggest that you either write the data to a file on your HDFS and create an external table on the folder containing that file, or parse the incoming data so that you can insert it as columns.
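A hedged sketch of the second option, reusing get_json_object from the answer above to turn the raw JSON strings into plain columns (the test_cols table is hypothetical):
CREATE TABLE test_cols (text string, number int);
INSERT INTO TABLE test_cols
SELECT get_json_object(str, '$.text'),
       CAST(get_json_object(str, '$.number') AS int)
FROM test;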
I would like to export each table of my SQLite3 database to CSV files for further manipulation with Python, and after that I want to export the CSV files into a different database format (PSQL). The ID column in SQLite3 is of type GUID, hence gibberish when I export tables to CSV as text:
l_yQ��rG�M�2�"�o
I know that there is a way to turn it into a readable format, since the SQLite Manager add-on for Firefox does this automatically, sadly without any reference to how it does so or which query it uses:
X'35B17880847326409E61DB91CC7B552E'
I know that QUOTE (GUID) displays the desired hexadecimal string, but I don't know how to dump it to the CSV instead of the BLOB.
I found out what my error was (not why it doesn't work, but how to get around it).
So I tried to export my tables as stated in https://www.sqlite.org/cli.html, namely with a multiline command, which didn't work:
sqlite3 'path_to_db'
.headers on
.mode csv
.output outfile.csv
SELECT statement
and so on.
I was testing a few things, and since I'm lazy while testing, I used the single-line variant, which got the job done:
sqlite3 -header -csv 'path_to_db' "SELECT QUOTE (ID) AS Hex_ID, * FROM Table" > 'output_file.csv'
Of course it would be better to specify all the column names instead of using *, but this suffices as an example.
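If you want the bare hexadecimal string without the X'...' wrapper that QUOTE produces, SQLite's hex() function should do it; an untested variant of the command above:
sqlite3 -header -csv 'path_to_db' "SELECT hex(ID) AS Hex_ID, * FROM Table" > 'output_file.csv'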
How do I import a .txt file into a MySQL table?
My .txt file is like this...
ex : AF0856427R1 000002200R HADISUMARNO GGKAMP MALANG WET 3 6 00705 AFAAADF16000-AD-FA P00.001.0 1 000001.00022947.70023290.00 T511060856425A 022014B
There are 39 fields in my file.
Try the mysqlimport command.
The name of the text file should be the name of the table into which you want the data to be imported. For example, if your file is named patient.txt, the data will be imported into the patient table:
mysqlimport [options] db_name textfile
There are a lot of options that you can pass in; the documentation covers them.
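As a hedged example, assuming a comma-separated patient.txt on the client machine (--local makes mysqlimport read a client-side file):
mysqlimport --local --fields-terminated-by=',' db_name patient.txt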
Especially since some of your fields are separated by spaces and some are fixed-width, I would definitely first do some string manipulation with your favorite tool (sed, awk, and perl are all likely very good choices).
Create an intermediary comma-separated file. If you have commas in the existing data, you can easily use some other character. The goal is to create a file that has one consistent separator.
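Once that intermediary file exists, a hedged LOAD DATA sketch could pull it in (the table name, the '|' separator, and the file name are all assumptions):
LOAD DATA LOCAL INFILE 'patient_intermediate.csv'
INTO TABLE patient
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n';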
You've used the phpMyAdmin tag, so from your table go to the Import tab, select the file, and pick CSV from the dropdown of file types. Edit the options according to what your file looks like (for instance, perhaps § is your column separator and you might leave the next two options blank). Then try the import and check the data to make sure it all arrived in the columns you expected.
Good luck.
I have a local file named UpdateTable.csv that looks like this:
Chromosome ProbeCount TranscriptCount
chr1 84453 2887
chr10 32012 1087
chr11 49780 1721
chr12 39723 1402
...etc
I just created a table named "SUMMARY" that has the same column names. I need to import the file into my table from my desktop.
Thank you for your help!
You can use LOAD DATA INFILE.
Read more here:
http://dev.mysql.com/doc/refman/5.6/en/load-data.html
In your situation, something like:
LOAD DATA INFILE 'c:/users/USER_NAME/Desktop/file.csv'
INTO TABLE summary
FIELDS TERMINATED BY "\t"
LINES TERMINATED BY "\r\n"
IGNORE 1 LINES;  -- skip the header row (Chromosome, ProbeCount, TranscriptCount)
Of course, this SQL is only an example. Please read the manual.
mysqlimport [options] db_name textfile1 [textfile2 ...]
For each text file named on the command line, mysqlimport strips any extension from the file name and uses the result to determine the name of the table into which to import the file's contents. For example, files named patient.txt, patient.text, and patient all would be imported into a table named patient.
So in your case, you would have to change your file name from UpdateTable.csv to SUMMARY.csv and remove the header line at the top of that file. It would be something like:
mysqlimport --fields-escaped-by=, db_name SUMMARY.csv
EDIT: actually, on a second look, your file is not a CSV (comma-separated); it is tab-separated, and thus the field separator argument should be --fields-terminated-by='\t'.
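Putting that together, a corrected invocation might look like the following (hedged; --local and --ignore-lines=1 are assumptions, reading a client-side file and skipping the header line):
mysqlimport --local --ignore-lines=1 --fields-terminated-by='\t' db_name SUMMARY.csv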
How do I export SQLite into CSV using RSQLite?
I am asking because I am not familiar with database files, so I want to convert it using R.
It may be very simple, but I haven't figured it out.
Not quite sure if you have figured this out. I am not quite sure how to do it within R either, but it seems pretty simple to export to CSV using SQLite itself, or by writing out a CSV from the database you have loaded into R.
In SQLite, you can do something like this at your command prompt
>.mode csv
>.output output.csv
>.headers on
>select * from table_name;
>.exit
SQLite will then write your table out to the output.csv file.
If the table is not too large, you can first read it into a data frame or matrix in R, using dbGetQuery or dbSendQuery with fetch. Then you can write that data frame out as a .csv file.
library(RSQLite)  # provides dbConnect() and dbGetQuery()
My_conn <- dbConnect(SQLite(), "path_to_db")  # assumed connection to your database file
my.data.frame <- dbGetQuery(My_conn, "SELECT * FROM My_Table")
write.csv(my.data.frame, file = "MyFileName.csv", ...)
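Since the question asks about exporting each table, a hedged loop over dbListTables could cover the whole database (same assumed My_conn as above):
for (t in dbListTables(My_conn)) {
  df <- dbGetQuery(My_conn, paste0("SELECT * FROM ", t))      # read one table
  write.csv(df, file = paste0(t, ".csv"), row.names = FALSE)  # one CSV per table
}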