I'm trying to convert a date-type column into a nice, human-readable string, like so: 25/11/2016 (or any other mask I'd like to use).
Does BigQuery support masks when formatting dates? When I use the DATE() function it returns something like "2016-05-05", but that's not the standard pattern in many countries.
I've searched for a lot of different things; the closest thing I found is this doc: https://cloud.google.com/bigquery/query-reference but I didn't see anything there that would help me.
Check STRFTIME_UTC_USEC (a legacy SQL function; it takes a timestamp, so pass it NOW() rather than CURRENT_DATE()):
SELECT STRFTIME_UTC_USEC(NOW(), '%d/%m/%Y')
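In standard SQL, which is now BigQuery's default dialect, FORMAT_DATE does the same job (a minimal example; FORMAT_TIMESTAMP covers TIMESTAMP columns the same way):
SELECT FORMAT_DATE('%d/%m/%Y', CURRENT_DATE())
-- returns e.g. '25/11/2016'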
I was wondering if there is any way to use custom formats when using the FORMAT function.
Something like:
SELECT FORMAT(money, "##.###,00") FROM account
or any other way to format values directly inside the query.
This is actually to find a workaround for a "wrong" number format in the it_IT locale (at least in our case), which comes out as #####,00 instead of ##.###,00.
This looks like a known bug, as stated in https://bugs.mysql.com/bug.php?id=73436.
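If the goal is just the ##.###,00 grouping, one workaround (a sketch, assuming MySQL 5.5+, where FORMAT accepts an optional locale argument) is to borrow a locale whose convention matches what it_IT should have produced, such as de_DE:
SELECT FORMAT(money, 2, 'de_DE') FROM account;
-- 1234567.891 -> '1.234.567,89'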
STR_TO_DATE(string_time,'%Y-%m-%d %H:%i:%s') > (select timestamp from table)
This line causes the error 'Function str_to_date not registered' in Athena. Is there any way to work around this problem?
All databases have their own set of functions, even though some are common and exist in more than one. STR_TO_DATE is not available in Athena, but there are lots of other date and time functions that can be used to achieve the same goal.
You can find links to all functions supported by Athena here: https://docs.aws.amazon.com/athena/latest/ug/presto-functions.html
In your case I think you can use parse_datetime, which does the same job as STR_TO_DATE does in your example (note that it takes Joda-Time patterns rather than MySQL %-specifiers).
Alternatively, I think you could cast the string to a timestamp, since the format you are using matches Athena's default: try CAST(string_time AS TIMESTAMP)
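A sketch of the options, reusing the column names from the question (date_parse isn't mentioned above, but it belongs to the same Presto function set and takes MySQL-style specifiers, so the original format string carries over unchanged):
date_parse(string_time, '%Y-%m-%d %H:%i:%s') > (select timestamp from table)
-- or, with a Joda-Time pattern:
parse_datetime(string_time, 'yyyy-MM-dd HH:mm:ss') > (select timestamp from table)
-- or simply:
CAST(string_time AS TIMESTAMP) > (select timestamp from table)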
I'm asking for ideas on how to convert this kind of date in SQL, from May-15-2020 18:03 to 'yyyyMMddHHmiss' or 'yyyyMMdd'.
I am trying this query
select from_unixtime(unix_timestamp('May-15-2020 16:03', 'MM-dd-yyyy HH:mi'), 'yyyyMMdd') from dual
but it won't work.
Use the right function, STR_TO_DATE, and use a format that matches your date rather than something scraped from a previous answer or blog post.
Reference manuals of date and time functions are very useful for solving these basic problems.
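A sketch of what that looks like (assuming MySQL, which is where STR_TO_DATE lives; %b matches the abbreviated month name and %H/%i the 24-hour time):
SELECT DATE_FORMAT(STR_TO_DATE('May-15-2020 18:03', '%b-%d-%Y %H:%i'), '%Y%m%d%H%i%s');
-- '20200515180300'
SELECT DATE_FORMAT(STR_TO_DATE('May-15-2020 18:03', '%b-%d-%Y %H:%i'), '%Y%m%d');
-- '20200515'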
I need to import a large file of csv data into MySQL, and when I attempted to use MySQL's unix_timestamp function to import the dates, about half of the records didn't make it.
As far as I can tell, the datetime values are formatted with either one or two digits for the month, and likewise for the day of the month (e.g. 6/6/2014 3:48PM vs. 12/16/2014 3:48PM). This throws off the import completely (well, about half of the records won't import).
I'm trying to convert this into a unix_timestamp.
Now I know I could write a script with a regex to fix something like this, but I am wondering: is there a simpler way to do a mass import like this? For the record, I am using my text editor to turn the csv rows into INSERT INTO statements. This is where I tried to use date formatting, but it seems to accept only one format.
Any way to do this with such a minor difference in input?
Actually, despite my comment, something like this might work:
COALESCE(
    STR_TO_DATE(val, 'formatcandidate1'),
    STR_TO_DATE(val, 'formatcandidate2'),
    STR_TO_DATE(val, 'formatcandidate3'),
    STR_TO_DATE(val, 'formatcandidate4'),
    [etc...]
) AS dateVal
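For this particular data, though, the candidate list may be unnecessary: MySQL's %c (month) and %e (day) specifiers accept one or two digits, so a single format can cover all four cases (a sketch, assuming the values look like the examples in the question):
STR_TO_DATE(val, '%c/%e/%Y %l:%i%p')
-- parses both '6/6/2014 3:48PM' and '12/16/2014 3:48PM'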
There are online tools to do this kind of stuff; reports.zoho.com is one of them.
In this tool you can import data applying a specific date format, skipping the rows that don't match it.
You can then repeat that for each of the other formats present in your file,
and finally export the data with a single, consistent date format throughout.
Ask me if you have any doubts about this :)
I am doing a series of updates on some tables after I import them from tab-separated values. The data comes with dates in a format I do not like. I bring them in as strings, manipulate them so that they are in the same format as MySQL dates and then convert the column. Or sometimes not, but I want them to be like MySQL dates even if they are strings.
They start out like '1/4/2013 12:00:00 AM' or '11/4/2012 2:37:45 PM'.
I turn these into '2013-01-04' (usually dropping the time, since times are present even when the original schema clearly specifies dates only) and '2012-11-04 14:37:45'.
I am using rlike. And this does not use indexes? Wow. That sucks.
But already, for each column, I have to use 4 updates to handle the different cases ('1/7', '2/13', '11/2', '12/24'). If I did these using like, it might take 16 different updates for each column....
And, if I am seeing it right, I cannot even get positional parameters out of the rlike expression, yes? You know, the part of the expression wrapped in parentheses that becomes $1 or $2....
So, it seems as though it is going to be quicker to pre-process the tsv file with perl. Really? Wow. Again, this sucks.
Any other suggestions? I cannot have this taking 3 hours every time I need to pull in the data.
Recall the classic 1997 quote from Jamie Zawinski:
Some people, when confronted with a problem, think "I know, I'll use regular expressions."
Now they have two problems.
Have you tried using STR_TO_DATE()? It exists exactly for parsing nonstandard date/time strings into canonical datetime values.
If you try parsing with STR_TO_DATE() and the string doesn't match the expected format, the function returns NULL.
So you could try parsing in different formats, and return the first one that gives a non-null result.
UPDATE mytable
SET datecolumn = COALESCE(
STR_TO_DATE(stringcolumn, '%m/%d'),
STR_TO_DATE(stringcolumn, '%d/%m/%Y'),
...etc.
);
I can't tell what your different cases are. It might or might not be possible to cover all cases in one pass.
Another alternative is, as you say, to preprocess the raw data with Perl before you load it into MySQL. But even then, don't fight with regular expressions; use Date::Parse instead.