The code is:
var rightNow = new Date();
console.log(rightNow);
In Chrome 66 it returns:
Tue Jun 12 2018 15:36:19 GMT+0530 (IST)
In Chrome 67 it returns:
Tue Jun 12 2018 15:36:43 GMT+0530 (India Standard Time)
Why the difference?
A lot of my code relies on the behaviour in Chrome 66. Do I need to change all of it?
In general, you should not code for a specific browser, but to the standards. In this case, the standard is ECMAScript (ECMA-262). Section 20.3.4.41 covers Date.prototype.toString(), which defers to the internal ToDateString(tv) operation described in the following section, which explains:
Return an implementation-dependent String value that represents tv as a date and time in the current time zone using a convenient, human-readable form.
By "implementation-dependent", it means that the actual string value is not defined by the specification, and can vary from one implementation to another. You are not guaranteed the same result from one browser to the next, or from one version of a browser to the next, or even from one operating system to the next from the same version of the same browser.
By "human-readable form", it means that the value produced is suitable for display to a human being. It makes no guarantees about that value being represented in a manner that can be parsed consistently by computer code.
Thus, if you intend for the string value to be sent to other code, you should not use .toString(). In general, you should prefer an ISO 8601 formatted string for this purpose. Use .toISOString() if you want the result in terms of UTC. Refer to this answer (or use a library) if you want the result in terms of local time including a time zone offset.
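A quick sketch of the difference (the output strings are illustrative only; the whole point is that toString() varies):

var d = new Date();
d.toString();     // implementation-dependent, e.g. "Tue Jun 12 2018 15:36:19 GMT+0530 (India Standard Time)"
d.toISOString();  // always ISO 8601 in UTC, e.g. "2018-06-12T10:06:19.000Z"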
As to why things changed between Chrome 66 and Chrome 67 - I don't have the exact details, but I assume Chrome switched from using IANA TZDB abbreviations to using CLDR time zone names, probably through its use of ICU. This is reasonable, and is what many other implementations are doing. There's no requirement it use one set of data or the other though, so don't rely on such things.
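If you want the long CLDR-style zone names deliberately, rather than as a toString() side effect, the Intl API exposes them directly (a sketch; output is again implementation- and locale-dependent):

new Intl.DateTimeFormat('en', {
  timeZone: 'Asia/Kolkata',
  timeZoneName: 'long'
}).format(new Date());
// e.g. "6/12/2018, India Standard Time"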
Related
The HTML5 specs for the time element have a note under the heading "A valid time-zone offset string" that says this:
For times without dates (or times referring to events that recur on multiple dates), specifying the geographic location that controls the time is usually more useful than specifying a time zone offset, because geographic locations change time zone offsets with daylight savings time. [...]
While I totally agree with this statement, I have been wondering - and this is my question - how can I specify a geographic location in the time element? I've been looking through the specs but I haven't found a clue. Additional web research also didn't yield any useful information. Can someone point me in the right direction?
BTW: I'm a beginner in web programming, and although this really seems to be just a minor detail, I'd like to get things right from the start.
As far as I am aware, there is no way to specify <time> via region with raw HTML. I believe the documentation is simply stating that it's more useful to do it based on region, not that it is necessarily possible with raw HTML. This can certainly be achieved with a back-end language however, and injected into the <time> element (or datetime attribute).
Time zones can be specified as an offset relative to GMT:
<!-- GMT+1 (like Italy) -->
<time>+01:00</time>
And can be combined with fully-qualified times as well:
<!-- 16th September 2014 at 18 hours, 20 minutes, and 30 seconds
in a time zone of GMT+1 (like Italy) -->
<time>2014-09-16T18:20:30+01:00</time> in Italy
As is demonstrated above, perhaps the best you can do is explicitly state the relevant region, such as <time …>…</time> in Italy.
If you need a geographic time zone, IANA maintains a list of all applicable time zones per region.
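As a sketch of the back-end approach mentioned above (hypothetical code, assuming a Node.js server; none of this is part of the HTML spec), you could format the instant for a named IANA zone and inject it into the <time> element, keeping the machine-readable UTC instant in the datetime attribute:

var when = new Date('2014-09-16T17:20:30Z');
// Europe/Rome is a geographic IANA zone, not a fixed offset
var text = when.toLocaleString('en-GB', { timeZone: 'Europe/Rome' });
var html = '<time datetime="' + when.toISOString() + '">' + text + '</time> in Italy';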
Dates should be in the format yyyy-mm-ddTHH:MM[:SS[.mmm]] or yyyy-mm-dd HH:MM[:SS[.mmm]], where:
H stands for hours
M stands for minutes
S stands for seconds
m stands for milliseconds
The square brackets indicate the parts that are optional
Hope this helps! :)
From W3Schools:
Definition and Usage
The <time> tag defines a human-readable date/time.
This element can also be used to encode dates and times in a machine-readable way so that user agents can offer to add birthday reminders or scheduled events to the user's calendar, and search engines can produce smarter search results.
From Mozilla:
The HTML <time> element represents either a time on a 24-hour clock or a precise date in the Gregorian calendar (with optional time and timezone information).
So in other words, the time element isn't really supposed to be used for a precise geolocation, but maybe a timezone. For location, like @Ryan suggested, do something along the lines of <time …>…</time> in Paris
I am trying to pass a number (date in ms) to a function in a library module. The number gets screwed up!
Here is a simple look (function MailUtils.showNum has only one line, the same log call as seen below):
n = Number(todayMs - mbRetMs);
Logger.logDebug("Num = " + n + "; as Date = " + new Date(n));
MailUtils.showNum(n);
Log:
/* Num = 1500396760628; as Date = Tue Jul 18 2017 12:52:40 GMT-0400 (EDT) */
/* Num = 1453174324; as Date = Sat Jan 17 1970 14:39:34 GMT-0500 (EST) */
Seriously ???
What the * is happening? Looks like it somehow figures out it is a date and passes the origin date (the date the ms are counted from)?
LOL, Int32 Overflow #Fail. What you’re seeing is your original [64-bit] integer's 32 Least Significant Bits; the high bits have all been stripped.
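You can reproduce it in plain JavaScript (a sketch; the numbers are taken from the log above):

var n = 1500396760628;
var low32 = n % Math.pow(2, 32);  // keep only the low 32 bits: 1453174324
new Date(low32);                  // Sat Jan 17 1970 14:39:34 GMT-0500 (EST), matching the second log line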
Odd in itself, since JS uses Double internally; presumably a bug in JXA. (It has a lot of those.)
If you like JavaScript I strongly recommend using Node.js instead. JXA is rubbish in comparison, and with OSA technologies now in maintenance mode I don’t imagine it’ll ever get fixed.
MINOR EDIT: I say below that JPL's Horizons library is not open source. Actually, it is, and it's available here: http://naif.jpl.nasa.gov/naif/tutorials.html
At 2013-01-01 00:00:00 UTC at 0 degrees north latitude, 0 degrees east longitude, sea level elevation, what is the J2000 epoch right ascension and declination of the moon?
Sadly, different libraries give slightly different answers. Converted to degrees, the summarized results (RA first):
Stellarium: 141.9408333000, 9.8899166666 [precision: .0004166640, .0000277777]
Pyephem: 142.1278749990, 9.8274722221 [precision .0000416655, .0000277777]
Libnova: 141.320712606865, 9.76909442356909 [precision unknown]
Horizons: 141.9455833320, 9.8878888888 [precision: .0000416655, .0000277777]
My question: why? Notes:
I realize these differences are small, but: I use pyephem and libnova to calculate sun/moon rise/set, and these times can be very sensitive to position at higher latitudes (e.g., midnight sun).
I can understand JPL's Horizons library not being open source, but the other three are. Shouldn't someone work out the differences in these libraries and merge them? This is my main complaint. Do the stellarium/pyephem/libnova library authors have a fundamental difference in how to make these calculations, or do they just need to merge their code?
I also realize there might be other reasons the calculations are different, and would appreciate any help in rectifying these possible errors:
Pyephem and Libnova may be using the epoch of the date instead of J2000
The moon is close enough that observer location can affect its RA/DEC (parallax effect).
I'm using Perl's Astro::Nova and Python's pyephem, not the original C implementations of these libraries. However, if these differences are caused by using Perl/Python, that is important in my opinion.
My code (w/ raw results):
First, Perl and Astro::Nova:
#!/bin/perl
# RA/DEC of moon at 0N 0E at 0000 UTC 01 Jan 2013
use Astro::Nova;
# 1356998400 == 01 Jan 2013 0000 UTC
$jd = Astro::Nova::get_julian_from_timet(1356998400);
$coords = Astro::Nova::get_lunar_equ_coords($jd);
print join(",",($coords->get_ra(), $coords->get_dec())),"\n";
RESULT: 141.320712606865,9.76909442356909
Second, Python and pyephem:
#!/usr/local/bin/python
# RA/DEC of moon at 0N 0E at 0000 UTC 01 Jan 2013
import ephem
e = ephem.Observer()
e.date = '2013/01/01 00:00:00'
moon = ephem.Moon()
moon.compute(e)
print moon.ra, moon.dec
RESULT: 9:28:30.69 9:49:38.9
The Stellarium result (snapshot):
The JPL Horizons result (snapshot):
[JPL Horizons requires POST data (not really, but pretend), so I couldn't post a URL.]
I haven't linked them (lazy), but I believe there are many unanswered questions on stackoverflow that effectively reduce to this question (inconsistency of precision astronomical libraries), including some of my own questions.
I'm playing with this stuff at: https://github.com/barrycarter/bcapps/tree/master/ASTRO
I have no idea what Stellarium is doing, but I think I know about the other three. You are correct that only Horizons is using J2000 instead of the epoch-of-date for this apparent, locale-specific observation. You can bring it into close agreement with PyEphem by clicking "change" next to the "Table Settings" and switching from "1. Astrometric RA & DEC" to "2. Apparent RA & DEC."
The difference with Libnova is a bit trickier, but my late-night guess is that Libnova uses UT instead of Ephemeris Time, and so to make PyEphem give the same answer you have to convert from one time to the other:
import ephem
moon, e = ephem.Moon(), ephem.Observer()
e.date = '2013/01/01 00:00:00'
e.date -= ephem.delta_t() * ephem.second  # shift by delta-T to bridge UT and Ephemeris Time
moon.compute(e)
print moon.a_ra / ephem.degree, moon.a_dec / ephem.degree  # astrometric coordinates, in degrees
This outputs:
141.320681918 9.77023197401
Which is, at least, much closer than before. Note that you might also want to do this in your PyEphem code if you want it to ignore refraction like you have asked Horizons to; though for this particular observation I am not seeing it make any difference:
e.pressure = 0  # zero atmospheric pressure turns off the refraction model
Any residual difference is probably (but not definitely; there could be other sources of error that are not occurring to me right now) due to the different programs using different formulae to predict where the planets will be. PyEphem uses the old but popular VSOP87. Horizons uses the much more recent — and exact — DE405 and DE406, as stated in its output. I do not know what models of the solar system the other products use.
In a multi-part (i.e. Content-Type=multipart/form-data) form, is there an upper limit on the length of the boundary string that an HTTP server should accept?
As far as I can tell, the relevant RFCs say 70 chars:
RFC2616 (HTTP/1.1) section "3.7 Media Types" says that the allowed types in the Content-Type header are defined by RFC1590 (Media Type Registration Procedure).
RFC1590 updates RFC-1521 (MIME).
RFC1521 says that a boundary "must be no longer than 70 characters, not counting the two leading hyphens".
The same text also appears in RFC2046 which supposedly obsoletes RFC1521.
So can I be certain all the major HTTP/1.1 browsers out there today adhere to this limit? Are there any browsers (or other HTTP clients/libraries) known to break this limit?
Is there some other spec or common rule-of-thumb I'm missing that says the string will be shorter than 70 chars? In Chrome(ium) I get something like this: ----WebKitFormBoundaryLu4dNSGEhJZUgoe5, which is obviously shorter than 70 chars.
I'm asking this question because my server is running in an extremely memory-constrained environment, so "malloc a buffer large enough to hold the entire header string" is not an ideal answer.
As you note, RFC 2046 updated the MIME spec but kept the restriction limiting the boundary string to 70 characters, not counting the two leading hyphens.
I think it's a fair assumption that the spec is followed by all major browsers (and all MIME-using clients, like mail programs) since otherwise passing around multipart data would be very risky indeed.
To be sure, I've experimentally verified it for you using the latest versions of:
curl: ----------------------------5a56a6c893f2 (40)
Chrome 30 (WebKit): ----WebKitFormBoundarym0vCJKBpUYdCIWQG (38)
Safari 6 (WebKit, and same as Chrome): ----WebKitFormBoundaryFHUXvJBZwO2JKkNa (38)
FireFox 24: ---------------------------7096603861379320641089344535 (55)
IE 10: ---------------------------7dd1961640278 (40) - same technique as curl!
Apache HttpClient: -----------------------------1294919323195 (42)
Thus not only does every major browser/client conform, but all would allow you to save 15 allocated bytes per boundary per buffer from the theoretical max. If you could trivially switch on user agent, you could squeeze even more performance out. ;-)
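For the memory-constrained case, here is a minimal sketch of enforcing the limit up front (a hypothetical helper, not from any particular server):

function parseBoundary(contentType) {
  var match = /boundary="?([^";]+)"?/.exec(contentType);
  if (!match) return null;                // no boundary parameter at all
  if (match[1].length > 70) return null;  // over the RFC 2046 maximum: reject the request
  return match[1];
}

With the limit enforced, a full delimiter line ("--" + boundary + optional closing "--" + CRLF) never exceeds 76 bytes, so a small fixed-size buffer is enough.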
I am attempting to emulate a (no longer existing) mainframe report generator in an Access 2003 or Access 2010 environment. The data it generates must match exactly with paper reports from the early 70s. Unfortunately, the earliest years' data were run on hardware that used IBM floating point representation instead of IEEE. With the help of Google, I've found a library of VBA functions that will convert a float from decimal to the IEEE 754 32-bit binary format. I had to modify the library to accept either 32-bit or 64-bit floats, so I have a modest working knowledge of floating point formats; however, I'm having trouble making the conversion from IEEE to IBM binary format, as well as trouble multiplying and adding either the IBM or the IEEE numbers.
I haven't turned up any other libraries for performing this conversion and arithmetic operations in VBA - is there an easier way to go about this, or an existing library that I'm not finding? Failing that, a clear and straightforward explanation of the relevant algorithms?
Thanks in advance.
To be honest you'd probably do better to start by looking at the Hercules emulator: http://www.hercules-390.org/. Other than that, in theory with VBA you can use the Decimal type to get good results (note you have to use CDec to create these); it uses 12 bytes with a variable power-of-ten scalar.
A quick Google shows this post from the Hercules group, which confirms Albert's point about needing to know the hardware:
---snip---
In theory, but rather less so in practice. S/360 and S/370 had a
choice of Scientific or Commercial instruction sets. The former added
the FP instructions and registers to the base; the latter the decimal
instructions, including Edit and Edit & Mark. But larger 360 (iirc /65
and up) and 370 (/155 and up) models had the union of the two, called
the Universal instruction set, and at some point the S/370 dropped the
option.
---snip---
I have to say that having looked at the Hercules source code, you'll probably need to figure out exactly which floating point operation codes (in terms of precision: single, long, extended) are being performed.
The problem here is that you're confusing the Decimal type in Access with the Single and Double floating point types available in Access.
If you use the Currency data type in Access, this is a scaled integer, and will not produce rounding (that is what most of us use for financial calculations and reports). You can also use Decimal values in Access, and again they don't round at all as they are packed decimals.
However, both the Single and Double values available inside of Access are in fact the same format and conform to the IEEE floating point standard.
For an Access Single variable, this is a 32-bit number, and the range is:
-3.402823E38 to -1.401298E-45 for negative values
1.401298E-45 to 3.402823E38 for positive values
That looks to be the same to me as the IEEE 754 standard.
So, if you add up values in Access as a Single, you should get the same rounding results. So, Intel-based Access Singles and Doubles are, I believe, the same as this IEEE standard. The only real issue here is the format of the original data you're pulling into Access, and what kind of text, string, or conversion process occurs when that data is pulled in and stored.
Access can convert numbers. Try typing these values at the Access command line prompt (debug window):
? hex(255)
Above will show FF
? csng(&hFF)
Above will show 255
Edit:
Ah, OK, I see now I have this reversed; my mistake here. The problem is that assuming you convert a number to the older IBM format (Excess 64?), you will THEN have to get your hands on the code they used for adding those numbers. In fact, even back then, different IBM models, depending on what you purchased, actually produced different results (more money = more precision).
So, not only do you need conversion routines to convert to the internal representation, you THEN need the routines that add/subtract/multiply those numbers. So, just having conversion routines is not going to get you very far, since you also have to duplicate their exact routines that do math. Those types of routines are likely not all created equal in terms of how they round numbers etc.
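To make the conversion half concrete (the arithmetic emulation is the harder part, as noted above), here is a minimal illustrative sketch, in JavaScript rather than VBA, of packing a number into IBM System/360 single-precision hex float: 1 sign bit, a 7-bit excess-64 base-16 exponent, and a 24-bit fraction normalized into [1/16, 1). Treat it as a description of the format, not a drop-in library.

function toIbmSingle(v) {
  if (v === 0) return 0;                         // true zero is all bits zero
  var sign = v < 0 ? 1 : 0;
  v = Math.abs(v);
  var exp = 0;                                   // power-of-16 exponent
  while (v >= 1)     { v /= 16; exp++; }         // normalize into [1/16, 1)
  while (v < 1 / 16) { v *= 16; exp--; }
  var frac = Math.round(v * 0x1000000);          // 24-bit fraction
  if (frac === 0x1000000) { frac >>= 4; exp++; } // rounding overflowed: renormalize
  if (exp < -64 || exp > 63) throw new RangeError("out of IBM single range");
  // note: real S/360 hardware truncates in some operations; rounding here is a simplification
  return ((sign << 31) | ((exp + 64) << 24) | frac) >>> 0;
}
// e.g. toIbmSingle(1).toString(16) === "41100000" (1.0 = 0.0625 * 16^1)

Going the other way (IBM to IEEE) is the same unpacking in reverse, but as said above, matching the old reports also means matching the old machines' rounding behavior.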