In the table theGlobal in my MySQL database I have the field theTime set to CHAR(50) and the field theType set to CHAR(1).
Into the table theGlobal I import two different .csv files with the LOAD DATA INFILE syntax.
In the first .csv file I have these rows:
"T","0:01"
"B","1:05"
The format of 0:01 and 1:05 is mm:ss.
In the second .csv file I have these rows:
"L","00:07:10"
"L","01:21:39"
The format of 00:07:10 and 01:21:39 is hh:mm:ss.
After the import, the table theGlobal has turned the mm of the first .csv into hh and the ss of the first .csv into mm.
E.g.:
+---------+----------+
| theType | theTime  |
+---------+----------+
| B       | 1:05:00  |
| T       | 0:01:00  |
| L       | 00:07:10 |
| L       | 01:21:39 |
+---------+----------+
I need the field theTime to be in the format hh:mm:ss for all rows:
+---------+----------+
| theType | theTime  |
+---------+----------+
| B       | 00:01:05 |
| T       | 00:00:01 |
| L       | 00:07:10 |
| L       | 01:21:39 |
+---------+----------+
How can I resolve this?
Thank you so much in advance.
While loading the data, you can assign it to a variable first, then do whatever you need with the variable and load the result into the actual column. In your case this would look something like this:
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, @var1)
SET column2 = TIME(STR_TO_DATE(@var1, '%i:%S'));
Adjust the STR_TO_DATE() format string as needed. Here's a table explaining the specifiers (it's for DATE_FORMAT(), but the same ones work for STR_TO_DATE()).
Oh, and store the data in a TIME column, not a CHAR/VARCHAR one.
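Applied to the files in the question, that might look something like this (a sketch; the file names, field separators, and line terminators are assumptions you will need to adjust):
-- Sketch only: file names and terminators are assumptions.
-- First file: theTime arrives as mm:ss.
LOAD DATA INFILE 'first.csv'
INTO TABLE theGlobal
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(theType, @rawTime)
SET theTime = TIME(STR_TO_DATE(@rawTime, '%i:%s'));

-- Second file: theTime already arrives as hh:mm:ss.
LOAD DATA INFILE 'second.csv'
INTO TABLE theGlobal
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(theType, @rawTime)
SET theTime = TIME(STR_TO_DATE(@rawTime, '%H:%i:%s'));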
I have a table of locations that are verified, like so:
+--------+----------+----------+
| idW    | lat      | lon      |
+--------+----------+----------+
| 111650 | 47.20000 | 14.75000 |
| 111810 | 47.96412 | 16.25498 |
| 111820 | 47.83234 | 16.23143 |
+--------+----------+----------+
I also have a table of "all locations", whether verified or not. It looks like this (with lots of other columns I'm leaving out):
+--------+--------+----------+----------+
| id     | idW    | lat      | lon      |
+--------+--------+----------+----------+
| 100000 | 111650 | 47.20000 | 14.75000 |
| 100001 | 111712 | 42.96412 | 19.25498 |
| 100002 | 111820 | 47.83234 | 16.23143 |
+--------+--------+----------+----------+
What I would like to do is, for each verified location, find its "id" in the table of "all locations" and attach those ids as a new first column on the verified table (keeping in mind that not all verified locations exist in "all locations", so it's not as simple as copy and paste, I don't think). Any ideas?
Edit: The expected output from my example above would be:
+--------+--------+----------+----------+
| id     | idW    | lat      | lon      |
+--------+--------+----------+----------+
| 100000 | 111650 | 47.20000 | 14.75000 |
| 100002 | 111820 | 47.83234 | 16.23143 |
| x      | 111810 | 47.96412 | 16.25498 |
+--------+--------+----------+----------+
where x would be whatever value that 111810 had as its id in the all locations table.
The better option would be to only display the additional data when you query the database, using joins, either in a normal query or in a view:
select t1.*, t2.field1, t2.field2 from t1 inner join t2 on t1.idW=t2.idW
You can copy the data over to your 1st table (there are valid reasons to do it, e.g. optimization of selects, but it's a rare case). You need to add the extra columns to your first table using ALTER TABLE ... ADD COLUMN ... commands (or just use an SQL editor app).
Then to copy the data over:
update t1, t2 set t1.fieldname=t2.fieldname where t1.idW=t2.idW
Since adding columns to a table is not really efficient, you may choose to create a 3rd table from the existing ones and copy the data over:
create table t3 as select t1.*, t2.fieldname1, t2.fieldname2
from t1 inner join t2 on t1.idW=t2.idW
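Applied to the tables in the question, that could look something like this (a sketch; verified, all_locations, and verified_with_id are assumed names):
-- Sketch only: verified, all_locations, and verified_with_id are assumed names.
CREATE TABLE verified_with_id AS
SELECT a.id, v.idW, v.lat, v.lon
FROM verified AS v
INNER JOIN all_locations AS a ON v.idW = a.idW;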
If I understand correctly, you want to add a new column to your original table. This can be done as:
ALTER TABLE locations ADD COLUMN `id` INTEGER NULL DEFAULT NULL FIRST;
and afterwards you can populate it by getting the values from the verified locations table as
SET SQL_SAFE_UPDATES = 0;
UPDATE locations a SET id =
(SELECT id FROM verified_locations b
WHERE a.idW = b.idW AND a.lat = b.lat AND a.lon = b.lon
LIMIT 1);
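The same update can also be written as a multi-table UPDATE with a join (a sketch, assuming the same table and column names as above):
-- Sketch only: same effect as the correlated subquery, written as a joined UPDATE.
UPDATE locations a
JOIN verified_locations b
  ON a.idW = b.idW AND a.lat = b.lat AND a.lon = b.lon
SET a.id = b.id;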
The table test in my database has a single ENUM column. How should I format my .txt file in order to load data from the file into that column?
This is how I'm doing it right now:
text.txt:
0
1
2
2
1
MySQL Script:
LOAD DATA LOCAL INFILE 'Data/test.txt' INTO TABLE test
DESCRIBE test
+-------+-------------------+------+-----+---------+-------+
| Field | Type              | Null | Key | Default | Extra |
+-------+-------------------+------+-----+---------+-------+
| enum  | enum('0','1','2') | YES  |     | NULL    |       |
+-------+-------------------+------+-----+---------+-------+
The output:
+------+
| enum |
+------+
|      |
|      |
|      |
|      |
| 1    |
+------+
The first (possible) bug is the line-break character, which is '\n' by default on Unix systems. Check your file: there is a high probability that it is '\r\n', so add a LINES TERMINATED BY clause -
LINES TERMINATED BY '\r\n'
The second bug is the file name: you wrote 'text.txt', but in the LOAD DATA command you used 'test.txt'.
LOAD DATA INFILE Syntax
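Putting both fixes together, the load could look something like this (a sketch; keep whichever file name is actually correct, and only add the clause if the file really uses Windows line endings):
-- Sketch only: assumes the file really uses \r\n line endings.
LOAD DATA LOCAL INFILE 'Data/test.txt' INTO TABLE test
LINES TERMINATED BY '\r\n'
(`enum`);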
I have a table with two text columns, filled with strings:
CREATE TABLE [tbl_text]
(
[directoryName] nvarchar(200),
[text1] nvarchar(200),
[text2] nvarchar(200)
)
The strings are built like the following:
| Text1       | Text2   |
|-------------|---------|
| tz1 tz3 tz2 | al1 al2 |
| tz1 tz3     | al1 al3 |
| tz2         | al3     |
| tz3 tz2     | al1 al2 |
Now I want to count how many times each Text1 value occurs together with each Text2 value, resulting in the following:
| Text1 | al1  | al2  | al3  |
|-------|------|------|------|
| tz1   | 2    | 1    | 1    |
| tz2   | 2    | 2    | 1    |
| tz3   | 3    | 2    | 1    |
I tried solving it with an SQL query like this:
TRANSFORM Count(tt.directoryName) AS Value
SELECT tt.Text1
FROM tbl_text as tt
GROUP BY tt.Text1
PIVOT tt.Text2;
This works fine if the fields contain only one value each, like in the third row (the complete data source would have to be in a one-value-per-field style).
But in my case I'm using the strings for a multiselect...
If I apply this query to a data source where the values are separated by " ", the result is completely messed up.
Any suggestions on how the query should look to get this result?
You'll have to split the strings inside Text1/Text2 before you can do anything with them. In VBA, you'd loop over a recordset, use the Split() function, and insert the results into a temp table.
In SQL Server there are more powerful options available.
Coming from here: Split function equivalent in T-SQL?,
you should read this page:
http://www.sommarskog.se/arrays-in-sql-2005.html#tablelists
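If you do end up on SQL Server 2016 or later (an assumption; the linked article targets older versions), STRING_SPLIT() lets you unpivot the space-separated values before counting:
-- Sketch only, assuming SQL Server 2016+ where STRING_SPLIT() exists.
-- It counts each (Text1 value, Text2 value) combination; pivoting the counts
-- into al1/al2/al3 columns can then be done as in the question's query.
SELECT s1.value AS Text1,
       s2.value AS Text2,
       COUNT(*) AS cnt
FROM tbl_text AS t
CROSS APPLY STRING_SPLIT(t.text1, ' ') AS s1
CROSS APPLY STRING_SPLIT(t.text2, ' ') AS s2
GROUP BY s1.value, s2.value;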
I am trying to upload a CSV file (TSV actually), generated in MySQL using OUTFILE, into BigQuery using the bq tool. The table has the following schema:
Here is the sample data file:
"6.02" "0000" "101" \N "Md Fiesta Chicken|1|6.69|M|300212|100|100^M Sourdough|1|0|M|51301|112|112" "6.5" \N "V03" "24270310376" "10/17/2014 3:34 PM" "6.02" "30103" "452" "302998" "2014-12-08 10:57:15" \N
And this is how I try to upload it using bq CLI tool:
$ bq load -F '\t' --quote '"' --allow_jagged_rows receipt_archive.receipts /tmp/rec.csv
BigQuery error in load operation: Error processing job
'circular-gist-812:bqjob_r8d0bbc3192b065_0000014ab097c63c_1': Too many errors encountered. Limit is: 0.
Failure details:
- File: 0 / Line:1 / Field:16: Could not parse '\N' as a timestamp.
Required format is YYYY-MM-DD HH:MM[:SS[.SSSSSS]]
I think the issue is that the updated_at column is NULL and hence skipped. So, any idea how I can tell it to accept null/empty columns?
CuriousMind - This isn't an answer. Just an example of the problem of using floats instead of decimals...
CREATE TABLE fd (f FLOAT(5,2),d DECIMAL(5,2));
INSERT INTO fd VALUES (100.30,100.30),(100.70,100.70);
SELECT * FROM fd;
+--------+--------+
| f      | d      |
+--------+--------+
| 100.30 | 100.30 |
| 100.70 | 100.70 |
+--------+--------+
SELECT f/3+f/3+f/3,d/3+d/3+d/3 FROM fd;
+-------------+-------------+
| f/3+f/3+f/3 | d/3+d/3+d/3 |
+-------------+-------------+
|  100.300003 |  100.300000 |
|  100.699997 |  100.700000 |
+-------------+-------------+
SELECT (f/3)*3,(d/3)*3 FROM fd;
+------------+------------+
| (f/3)*3    | (d/3)*3    |
+------------+------------+
| 100.300003 | 100.300000 |
| 100.699997 | 100.700000 |
+------------+------------+
But why is this a problem, I hear you ask?
Well, consider the following...
SELECT * FROM fd WHERE f <= 100.699997;
+--------+--------+
| f      | d      |
+--------+--------+
| 100.30 | 100.30 |
| 100.70 | 100.70 |
+--------+--------+
...now surely that's not what would be expected when dealing with money?
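For comparison, the same predicate against the DECIMAL column behaves the way you'd expect (result written out by hand from the values above):
SELECT * FROM fd WHERE d <= 100.699997;
+--------+--------+
| f      | d      |
+--------+--------+
| 100.30 | 100.30 |
+--------+--------+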
To specify "null" in a CSV file, elide all data for the field. (It looks like you are using an unspecified escape syntax "\N".)
For example:
$ echo 2, > rows.csv
$ bq load tmp.test rows.csv a:integer,b:integer
$ bq head tmp.test
+---+------+
| a | b    |
+---+------+
| 2 | NULL |
+---+------+
I have cumulative input values that start life as smallints.
I read these values from an Access database and aggregate them into a MySQL database.
Now I'm faced with input values of type smallint that are cumulative, thus always increasing.
Input    Required output
---------------------------------
     0       0
 10000   10000
 32000   32000
-31536   34000   // overflow in the input
-11536   54000
  8464   74000
I process these values by inserting the raw data into a blackhole table; in the trigger on the blackhole table I adjust the data before inserting it into the actual table.
I know how to store the previous input and output, or if there is none, how to select the latest (and highest) inserted value.
But what's the easiest/fastest way to deal with the overflow, so that I get the correct output?
Given you have a table named test with a primary key called id and a column named value, just do this:
SELECT
id,
test.value,
(SELECT SUM(value) FROM test AS a WHERE a.id <= test.id) as output
FROM test;
This would be the output:
------------------------
| id | value  | output |
------------------------
|  1 |  10000 |  10000 |
|  2 |  32000 |  42000 |
|  3 | -31536 |  10464 |
|  4 | -11536 |  -1072 |
|  5 |   8464 |   7392 |
------------------------
Hope this helps.
If it doesn't work, just convert your data to INT (or BIGINT for lots of data). It does not hurt, and memory is cheap these days.
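A minimal sketch of that conversion, assuming the table and column name used above:
-- Sketch only: widens the column so the cumulative values can no longer overflow.
ALTER TABLE test MODIFY `value` BIGINT;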