Rails' :timestamp column type lies; it's actually just an alias for :datetime.
I'm using MySQL, and I want to use actual unix-timestamp TIMESTAMP columns.
a) Is there a nice way to set this, other than just making the column using SQL?
b) Will ActiveRecord cope with it properly (e.g. converting to Time when necessary, accepting a unix timestamp Integer as input, etc)? What gotchas should I expect to have to handle, and where?
Why:
Speed. This is for an extremely active table that's aggregating outside data sources that already use unix timestamps. Converting to datetime (or even converting first to a db string, which goes through 2 gsubs) uses up the majority of its import time. I could otherwise be doing just a dirt cheap Integer#to_s call.
Timezones. I don't want 'em. I want it stored timezone-agnostically; dealing with timezones is a pain and is completely irrelevant to my needs except at the very final stage before individual user display. The data itself has no need to know what timezone it was recorded in.
Size. It's a large table. TIMESTAMP is half the size of DATETIME.
Yes, I would still be doing updated_at calculations in code, not mysql. That part isn't a bottleneck.
Why your 'why not' is wrong (preemptively, to show I'm not asking for noobish reasons :-P):
"But TIMESTAMP auto updates": That's only true by default, and can be easily switched off.
I'm actually not using Rails, just ActiveRecord.
Yes, this is based on actual profiling data; I am not early optimizing. ActiveRecord::ConnectionAdapters::AbstractMysqlAdapter#quote (in Quoting#quoted_date [if passing Time] or Mysql2Adapter#quote_string [if preconverting to_s(:db)]) is actually the most CPU-consuming section of my scraper. I want rid of it.
This works (I just added a trailing whitespace character to the type definition, so the :timestamp alias doesn't override it):
add_column :sometable, :created_at, 'timestamp '
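For reference, the column this produces should end up looking something like the following. This is only a sketch assuming MySQL (table name taken from the snippet above); declaring the column NULLable with a NULL default is the easy way to switch off the auto-initialize/auto-update behavior mentioned earlier:
-- NULL DEFAULT NULL suppresses the implicit
-- DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
-- that MySQL otherwise gives the first TIMESTAMP column in a table
ALTER TABLE sometable
    ADD COLUMN created_at TIMESTAMP NULL DEFAULT NULL;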
I'm pretty noobish, but I'll give it a shot. What if you were to add your own custom column, or override the default ones? You can use custom data types with a string like so:
t.column :mysql_timestamp, 'timestamp'
and then somewhere else in your logic
def mysql_timestamp
  Time.now.strftime("%Y-%m-%d %H:%M:%S")
end
Not sure about b). Only one way to find out!
I have a table on ArcGIS which contains numbers and dates. I need to filter these via a SQL query. I only have the ability to change the where clause.
See here: https://services3.arcgis.com/rKOPqLnqVBkPP9th/arcgis/rest/services/Arbeitsmappe1/FeatureServer/0//query
Just enter 1=1 as the where clause and * for outFields, and you will get all results.
I have to filter installierte_leistung which contains numbers in the following formats:
1.050,20 ; 18; 0,1 ; 1.230
and dates of following format: 11.04.08
Desired filters:
installierte_leistung: I want to execute a SQL statement like: where (installierte_leistung BETWEEN '1' AND '2'). But the result also contains the 18. And if I ask for values greater than 10, it also shows me the 1.050,20.
I tried to convert with CAST and CONVERT to decimal, signed, unsigned, integer and so on, but the query was always invalid. I tried with 'number', with number, and with "number", lowercase and uppercase, and almost every conceivable variation. I get no results with CAST or CONVERT.
Same issue with the date. I want to filter monthly, meaning between 01.2008 and 09.2009, for instance.
Could someone please help me? Thanks a lot!
Falk
I had a similar problem in the past with a nested query. The more database-specific queries (like CAST and so on) don't work because ArcGIS Server is by default configured to work only with standardized queries. If you need to use more specific queries, you have to change the server setting to "standardizedQueries": "false"; check here for how (bottom of the page): http://resources.arcgis.com/en/help/main/10.2/index.html#//015400000641000000. Should work for you. Good luck.
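Once standardized queries are off, a where clause along these lines might work for the numeric filter. This is only a sketch: it assumes the backing database is MySQL-compatible and that the German-formatted strings can be normalized inline with REPLACE before casting:
-- strip the thousands separator, turn the decimal comma into a dot,
-- then cast so BETWEEN compares numerically rather than as text
CAST(REPLACE(REPLACE(installierte_leistung, '.', ''), ',', '.') AS DECIMAL(12,2)) BETWEEN 1 AND 2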
I am doing a series of updates on some tables after I import them from tab-separated values. The data comes with dates in a format I do not like. I bring them in as strings, manipulate them so that they are in the same format as MySQL dates and then convert the column. Or sometimes not, but I want them to be like MySQL dates even if they are strings.
They start out like '1/4/2013 12:00:00 AM' or '11/4/2012 2:37:45 PM'.
I turn these into '2013-01-04' (usually, since times are present even when the original schema clearly specifies dates only) and '2012-11-04 14:37:45'.
I am using rlike. And this does not use indexes? Wow. That sucks.
But already, for each column, I have to use 4 updates to handle the different cases ('1/7', '2/13', '11/2', '12/24'). If I did these using like, it might take 16 different updates for each column....
And, if I am seeing it right, I cannot even get capture groups out of the rlike expression, yes? You know, the part of the expression wrapped in parentheses that becomes $1 or $2....
So, it seems as though it is going to be quicker to pre-process the TSV file with Perl. Really? Wow. Again, this sucks.
Any other suggestions? I cannot have this taking 3 hours every time I need to pull in the data.
Recall the classic 1997 quote from Jamie Zawinski:
Some people, when confronted with a problem, think "I know, I'll use regular expressions."
Now they have two problems.
Have you tried using STR_TO_DATE()? This is exactly for parsing nonstandard date/time strings into canonical datetime values.
If you try parsing with STR_TO_DATE() and the string doesn't match the expected format, the function returns NULL.
So you could try parsing in different formats, and return the first one that gives a non-null result.
UPDATE mytable
SET datecolumn = COALESCE(
    STR_TO_DATE(stringcolumn, '%m/%d'),
    STR_TO_DATE(stringcolumn, '%d/%m/%Y'),
    ...etc.
);
I can't tell what your different cases are. It might or might not be possible to cover all cases in one pass.
Another alternative is as you say, preprocess the raw data with Perl before you load it into MySQL. But even then, don't fight with regular expressions, use Date::Parse instead.
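For the two sample strings in the question, a single format may even cover everything, since %c and %e accept unpadded months and days and %r parses 12-hour times. A sketch, using the question's own samples:
-- handles '1/4/2013 12:00:00 AM' and '11/4/2012 2:37:45 PM' alike;
-- %c = month, %e = day (both without leading zeros), %r = hh:mm:ss AM/PM
UPDATE mytable
SET datecolumn = STR_TO_DATE(stringcolumn, '%c/%e/%Y %r');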
The main problem is that I have datetime stored in the database, not the date (which is what I need). OK, never mind.
I have thousands of reports stored each day.
I need to take LEFT(datetime_view, 10) to cut off the time, and everything's fine. Except this: I'm trying to figure out why I have to put one day from the future into the condition. Otherwise it won't find what I want.
SELECT
    LEFT(datetime_view, 10),
    COUNT(type)
FROM reports
WHERE
    type IN (1, 2, 5)
    AND datetime_view >= '2012-10-28'
    AND datetime_view <= '2012-11-04'
GROUP BY LEFT(datetime_view, 10);
You can see I must search from the future. Why??
It gives me an output from 28.10 to 3.11 ....
Don't use string operations on date/time values. MySQL has a huge set of functions for date/time manipulation. Try
GROUP BY DATE(datetime_view)
instead, which extracts only the date portion of the datetime field. Your string operation is not Y10K-compliant; the DATE() function is.
As for your plus one day, consider how the comparisons are done: A plain date value, when used in date/time comparisons, has an implicit 00:00:00 time value attached to it, e.g. all dates have a time of "midnight".
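Putting both pieces together, a sketch of the query without string operations. The upper bound becomes an exclusive "less than the next day", so no day from the future is needed:
-- all of 2012-11-03 is included because the range is half-open,
-- not cut off at an implicit midnight
SELECT
    DATE(datetime_view) AS day,
    COUNT(type)
FROM reports
WHERE
    type IN (1, 2, 5)
    AND datetime_view >= '2012-10-28'
    AND datetime_view < '2012-11-04'
GROUP BY DATE(datetime_view);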
I think it's better to use DATE(datetime_view) to cut off the time instead of LEFT(datetime_view,10), and also in the where condition:
DATE(datetime_view) <= '2012-11-03'
I am using the joda-time Grails plugin in my Grails app to deal with durations. The plugin saves durations in the database as a number of seconds, prefixed with "PT" and suffixed with "S" (the ISO-8601 duration format). Example: 1 hour = PT3600S.
Now I want to order by this column. As it is a varchar, ordering gets complicated, or rather it does not come out in the order I want :(
Result:
PT12551S
PT21142S
PT23240S
PT4672S
PT4792S
PT4877S
Expected (if the type were int...):
PT4672S
PT4792S
PT4877S
PT12551S
PT21142S
PT23240S
Any idea how to work around this? Can I change the format of the data that is stored in the db? Can I change the ordering to take the length of the entry into account?
I figured out the best way to do it is not to use this kind of shortcut for now. I am now saving the durations as integers. That way it is easy to order and add.
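That said, if changing the storage weren't an option, the length-aware ordering suggested in the question works directly in SQL. A sketch with made-up table/column names, assuming whole-second values without decimal fractions:
-- shorter strings sort first; ties (equal length) sort lexicographically,
-- which matches numeric order for this PT<seconds>S format
SELECT duration
FROM report
ORDER BY LENGTH(duration), duration;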
models.FloatField creates a double in MySQL.
Is it possible, and how, to create a single-precision float field instead of a double-precision one?
The justification is similar to that of having SmallIntegerField.
Well, there is a better way than that, and a much easier one. I also wanted the same thing for my db, when I came across the following db_type() method in Django.
First, you need to create a custom field in Django by inheriting from the Field class:
class customFloatField(models.Field):
    def db_type(self, connection):
        return 'float'
Then you can use this like any other model field in your model class:
number = customFloatField(null=True,blank=True)
I tried it, and this does work for me in MySQL. To vary it per connection type, you would have to check the connection settings in an if statement and change the return value accordingly.
More about this is mentioned in https://docs.djangoproject.com/en/dev/howto/custom-model-fields/
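For comparison, the DDL the two fields end up producing in MySQL; the table and column names here are made up, and the stock mapping is the one the question observes (FloatField becomes a double):
-- hypothetical table: stock FloatField vs. the custom field above
CREATE TABLE myapp_measurement (
    id INTEGER AUTO_INCREMENT PRIMARY KEY,
    reading DOUBLE PRECISION NULL, -- models.FloatField
    number FLOAT NULL              -- customFloatField
);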
There are a few options to do this, but I don't understand why you would want to.
Change it in the database. This won't survive recreating the tables, but Django won't do a manual cast, so if the database type changes, so do the results.
Create a custom field type (i.e. inherit FloatField and change get_internal_type() so it returns something like SinglePrecisionFloatField). After that you will also need to create your own database backend and add your custom type to creation.DatabaseCreation.data_types (source: http://code.djangoproject.com/browser/django/trunk/django/db/backends/mysql/creation.py).
Change every FloatField to single precision. As above, you would have to create your own database backend and/or change the FloatField implementation in the current one.