DateTimeOffset Support in SciChart

Is it possible to use DateTimeOffset as the X-Axis data type in SciChart?
I've attempted to create a data series of
DataSeries<DateTimeOffset, double>
but get a runtime exception of
"Cannot create a DataDistributionCalculator for the type TX=System.DateTimeOffset"

According to the SciChart documentation for DataSeries, supported datatypes are as follows:
NOTE: Allowable types in SciChart include DateTime, TimeSpan, Int64, Int32, Int16, Byte, Double, Float, UInt64, UInt32, UInt16, SByte.
DateTime, TimeSpan are only allowable on TX. The type Decimal (128) bit is not allowed. Custom types are not allowed.
As a result it is not possible to declare a custom type for the TX in a DataSeries or an XAxis.
However, you may be able to achieve what you want by using the LabelProvider feature. If your goal is to offset DateTime values by a fixed amount, the LabelProvider lets you format the strings on the XAxis using a custom rule in code.
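For reference, here is a minimal sketch of that workaround, assuming the SciChart WPF API (XyDataSeries, LabelProviderBase; exact member names may differ by version): store the DateTimeOffset values as UTC DateTimes, then re-apply a fixed offset in a custom LabelProvider.

// Store the offset-bearing timestamps as plain UTC DateTimes.
var series = new XyDataSeries<DateTime, double>();
foreach (var sample in samples) // samples: IEnumerable<(DateTimeOffset Time, double Value)>, hypothetical
    series.Append(sample.Time.UtcDateTime, sample.Value);

// Re-apply a fixed offset when formatting axis labels.
public class OffsetLabelProvider : LabelProviderBase
{
    private readonly TimeSpan _offset = TimeSpan.FromHours(2); // hypothetical fixed offset

    public override string FormatLabel(IComparable dataValue) =>
        ((DateTime)dataValue + _offset).ToString("yyyy-MM-dd HH:mm");

    public override string FormatCursorLabel(IComparable dataValue) =>
        FormatLabel(dataValue);
}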
Is that what you need?


Is there a way to encode integer values that are larger than 32 bits using bs-json?

I've been using strings to represent decoded JSON integers larger than 32 bits. It seems string_of_int is capable of dealing with large integer inputs. So a decoder written like this (in the Json.Decode namespace):
id: json |> field("id", int) |> string_of_int, /* 'id' is string */
is successfully dealing with integers of at least 37 bits.
Encoding, on the other hand, is proving troublesome for me. The remote server won't accept a string representation, and is expecting an int64. Is it possible to make bs-json support the int64 type? I was hoping something like this could be made to work:
type myData = { id: int64 };
let encodeMyData = (data: myData) => Json.Encode.(object_([("id", int64(data.id))]));
Having to roll my own encoder is not nearly as formidable as a decoder, but ... I'd rather not.
You don't say exactly what problem you have with encoding. The int encoder does literally nothing except change the type, trusting that the int value is actually valid. So I would assume it's the int_of_string operation that causes problems. But that raises the question: if you can successfully decode it as an int, why are you then converting it to a string?
The underlying problem here is that JavaScript doesn't have 64 bit integers. The max safe integer is 2^53 - 1. JavaScript doesn't actually have integers at all, only floats, which can represent a certain range of integers, but can't efficiently do integer arithmetic unless they're converted to either 32-bit or 64-bit ints. And so for whatever reason, probably consistent overflow handling, it was decided in the EcmaScript specification that binary bitwise operations should operate on 32-bit integers. And so that opened the possibility for an internal 32-bit representation, a notation for creating 32-bit integers, and the possibility of optimized integer arithmetic on those.
So to your question:
Would it be "safe" to just add external int64 : int64 -> Js.Json.t = "%identity" to the encoder files?
No, because there's no 64-bit integer representation in JavaScript. int64 values are represented as an array of two numbers, I believe, but that is an internal implementation detail that's subject to change. Just casting to Js.Json.t will not yield the result you expect.
So what can you do then?
I would recommend using float. In most respects this will behave exactly like JavaScript numbers, giving you access to its full range.
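For example, a minimal sketch of the float approach, assuming bs-json's Json.Encode.float and Json.Decode.float, with the myData record reworked to hold a float:

type myData = { id: float }; /* floats are exact for integers up to 2^53 - 1 */

let encodeMyData = (data: myData) =>
  Json.Encode.(object_([("id", float(data.id))]));

let decodeMyData = json =>
  Json.Decode.({ id: json |> field("id", float) });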
Alternatively you can use nativeint, which should behave like floats except for division, where the result is truncated to a 32-bit integer.
Lastly, you could also implement your own int_of_string to create an int that is technically out of range by using a couple of lightweight JavaScript functions directly, though I wouldn't really recommend doing this:
let bad_int_of_string = str =>
  str |> Js.Float.fromString |> Js.Math.floor_int;

XSLT 3.0 transformation of JSON to XML -- numeric data types

SUMMARY
Some support for JSON was added in XSLT 3.0 + XPath/XQuery 3.1.
Unfortunately, JSON numbers are handled as IEEE doubles, subjecting the data to loss of numeric precision.
I am considering writing a set of custom functions based on Java BigDecimal instead of IEEE double.
Q: In order to support numeric precision beyond that offered by IEEE double, is it reasonable for me to consider cloning the JSON support in Saxon 9.8 HE and building a set of customized functions which use BigDecimal instead of IEEE double?
DETAIL
I need to perform a number of transformations of JSON data.
XSLT 3.0 + XPath 3.1 + XQuery 3.1 have some support for JSON through json-to-xml + parse-json.
https://www.w3.org/TR/xpath-functions-31/#json-functions
https://www.saxonica.com/papers/xmlprague-2016mhk.pdf
I have hit a significant snag related to treatment of numeric data types.
My JSON data includes numeric values that exceed the precision of IEEE doubles. In Java, my numeric values need to be processed using BigDecimal.
https://www.w3.org/TR/xpath-functions-31/#json-to-xml-mapping
states
Information may however be lost if (a) JSON numbers are not exactly representable as double-precision floating point ...
In addition, I have taken a look at the Saxon 9.8 HE reference implementation source for ./ma/json/JsonParser.java and confirmed that the private method parseNumericLiteral() returns a primitive double.
I am considering cloning the Saxon 9.8 HE JSON support code and using this as the basis for a set of customized functions which use Java BigDecimal instead of double in order to retain numeric precision through the transformations ...
Q: In order to support numeric precision beyond that offered by IEEE double, is it reasonable for me to consider cloning the JSON support in Saxon 9.8 HE and building a set of customized functions which use BigDecimal instead of IEEE double?
Q: Are you aware of any unforeseen issues which I may encounter?
The XML data model defines decimal numbers as having any finite precision.
https://www.w3.org/TR/xmlschema-2/#decimal
The JSON data model defines numbers as having any finite precision.
https://www.rfc-editor.org/rfc/rfc7159#page-6
Not surprisingly, both warn of potential interoperability issues with numeric values with extended precision.
Q: What was the rationale for explicitly defining the JSON number type in XPath/XQuery as IEEE double?
THE END
This is what the RFC says:
This specification allows implementations to set limits on the range
and precision of numbers accepted. Since software that implements
IEEE 754-2008 binary64 (double precision) numbers [IEEE754] is
generally available and widely used, good interoperability can be
achieved by implementations that expect no more precision or range
than these provide, in the sense that implementations will
approximate JSON numbers within the expected precision. A JSON
number such as 1E400 or 3.141592653589793238462643383279 may indicate
potential interoperability problems, since it suggests that the
software that created it expects receiving software to have greater
capabilities for numeric magnitude and precision than is widely
available.
That, to my mind, is a pretty clear warning: it says that although the JSON grammar allows arbitrary precision in numeric values, you can't rely on JSON consumers to retain that precision, and it follows that if you want to convey high-precision numeric values, it would be better to convey them as strings.
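To illustrate (hypothetical payload): the string form below survives any JSON consumer byte-for-byte, while the number form may be silently rounded to the nearest double:

{
  "balance_as_number": 3.141592653589793238462643383279,
  "balance_as_string": "3.141592653589793238462643383279"
}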
The rules for fn:json-to-xml and fn:xml-to-json need to be read carefully:
The fn:json-to-xml function creates an element whose string value is
lexically the same as the JSON representation of the number. The
fn:xml-to-json function generates a JSON representation that is the
result of casting the (typed or untyped) value of the node to
xs:double and then casting the result to xs:string. Leading and
trailing whitespace is accepted. Since JSON does not impose limits on
the range or precision of numbers, these rules mean that conversion
from JSON to XML will always succeed, and will retain full precision
in the lexical representation unless the data model implementation is
one that reconstructs the string value from the typed value. In the
reverse direction, conversion from XML to JSON may fail if the value
is infinity or NaN, or if the string value is such that casting to
xs:double produces positive or negative infinity.
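To illustrate the forward direction, feeding a 30-digit number through fn:json-to-xml keeps the lexical form; per the standard mapping,

json-to-xml('{"pi": 3.141592653589793238462643383279}')

returns

<map xmlns="http://www.w3.org/2005/xpath-functions">
  <number key="pi">3.141592653589793238462643383279</number>
</map>

Round-tripping that element back through fn:xml-to-json, however, goes via xs:double and loses the trailing digits.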
Although I probably wrote these words, I'm not sure I recall the exact rationale for why the decision was made this way, but it does suggest that the matter received careful thought. I suspect the thinking was that when you consume JSON, you should try to preserve all the information that is present in the input, but when you generate JSON, you should try to generate something that will be acceptable to all consumers. (The famous maxim about being liberal in what you accept and conservative in what you produce.)
Your analysis of the Saxon source isn't quite correct. You say:
the private method parseNumericLiteral() returns a primitive double.
which is true enough; but the original lexical representation is retained, and when the parser communicates the value to a JsonReceiver, it passes both the Java double and the string representation, so the JsonReceiver has access to both (which is needed for a correct implementation of fn:json-to-xml).

ANY, NONE and Unit in Nim

I couldn't find any specific information in the manual. Can anyone clarify how ANY, NONE and the unit type are reflected in Nim?
Short definitions:
A unit type is a type that allows only one value (and thus can hold no information). The carrier (underlying set) associated with a unit type can be any singleton set. There is an isomorphism between any two such sets, so it is customary to talk about the unit type and ignore the details of its value. One may also regard the unit type as the type of 0-tuples, i.e. the product of no types.
ANY: the type ANY, also known as ALL or Top, is the universal set (all possible values).
NONE: the "empty set".
Thank you!
Your question seems to be about sets. Let's have a look:
let emptySet: set[int8] = {}
This is an empty set that can hold int8 values. The {} literal for the empty set is implicitly converted to whatever concrete set type is required.
let singletonSet = {1'i8}
This is a set containing exactly one value (a unit type if I understand it correctly). The type of the set can now be automatically deduced from the type of the single value in it.
let completeSet = {low(int8) .. high(int8)}
This set holds all possible int8 values.
The builtin set type is implemented as a bitvector and can therefore only be used for element types with a small range of possible values (for int8, the bitvector is already 256 bits long). Besides int8, it is usually used for char and enumeration types.
Then there is HashSet from the sets module, which can hold larger types. However, if you construct a HashSet containing all possible values, memory consumption will probably be enormous.
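A quick sketch (assuming a recent Nim where the sets module exposes initHashSet):

import sets

var big = initHashSet[int]()  # hash-based, so the element type need not have a small range
big.incl(1_000_000_000)
echo big.len                  # 1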
Nim is not a functional language, and never claims to be one. There is no direct equivalent of these types; the solution is closer to the road that C++ takes.
There is void, which is closest to what Unit is. The Any type does not exist, but there is the untyped pointer. That type does not hold any type information, though, so you need to know what you can cast it to. And for NONE, or Nothing as I know it from Scala, you have to use void too, but there you can add the noreturn pragma.
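Roughly, a sketch of those three mappings:

proc unitLike(): void =           # void: the closest thing to Unit
  discard

proc nothingLike() {.noreturn.} = # never returns, like Scala's Nothing
  raise newException(ValueError, "diverges")

var anything: pointer             # untyped pointer: an "Any" with no type information attached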

Marshalling of a 32 bit int to a 16 bit int machine

I want to implement and understand the concept of marshalling over my own RPC mechanism (a toy, really). While I get the idea behind endianness, I am not sure how to handle 32-bit and 16-bit ints. The problem is that machine A represents int as 32 bits and wants to call a function int foo(int x) over an RPC call; however, the server's int representation is 16 bits. Sending just the lower 16 bits would lose information and is not desirable.
I know IDLs exist to solve this problem. But in this case, let's say I use an IDL that "defines" int to be 32 bits. While that works for my scenario, a machine whose native int is 16 bits will always waste 2 bytes when transmitting over the network.
If we flip the IDL to 16 bits, then the user has to manually split its local int and do something fancy, completely breaking the transparency of the RPC.
So what is the right way used in actual implementations?
thanks.
Usually, IDLs define several platform-independent types (UInt8, Int8, UInt16, Int16, UInt32, Int32, UInt64, Int64) and a few platform-dependent ones, such as int and uint. The platform-dependent types have only limited uses, such as the size/index of arrays. It is recommended to use platform-independent types for everything else.
If a parameter is declared in the IDL as Int32, then on any platform it MUST be Int32. If it's declared as Int, then it depends on the platform.
For example, see COM's VARENUM and VARIANT: there are platform-independent types (such as SHORT (VT_I2), LONG (VT_I4), LONGLONG (VT_I8)) and also machine types (such as INT (VT_INT)).
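A minimal C sketch of that convention: a parameter declared Int32 in the IDL always travels as exactly 4 big-endian bytes, regardless of the native int width on either machine (function names are made up for illustration):

#include <stdint.h>

/* Pack an IDL Int32 into exactly 4 bytes, big-endian ("network order"). */
void marshal_int32(int32_t value, unsigned char out[4])
{
    uint32_t u = (uint32_t)value;
    out[0] = (unsigned char)(u >> 24);
    out[1] = (unsigned char)(u >> 16);
    out[2] = (unsigned char)(u >> 8);
    out[3] = (unsigned char)u;
}

/* The 16-bit machine unpacks into int32_t as well; it only needs the
   exact-width type, not a 32-bit native int. */
int32_t unmarshal_int32(const unsigned char in[4])
{
    return (int32_t)(((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16) |
                     ((uint32_t)in[2] << 8) | (uint32_t)in[3]);
}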

What's the difference between MySQL BOOL and BOOLEAN column data types?

I'm using MySQL version 5.1.49-1ubuntu8.1. It allows me to define columns of two different data types: BOOL and BOOLEAN. What are the differences between the two types?
They are both synonyms for TINYINT(1).
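You can verify this yourself (sketch; the table name is hypothetical):

CREATE TABLE t (a BOOL, b BOOLEAN);
SHOW CREATE TABLE t;
-- both columns are reported as tinyint(1)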
As established in other comments, they're synonyms for TINYINT(1).
So, why do they bother differentiating between bool, boolean, and tinyint(1)?
Mostly semantics.
Bool and Boolean: MySQL converts these to the TINYINT(1) type by default. Per a MySQL statement made around the time of this writing, "We intend to implement full boolean type handling, in accordance with standard SQL, in a future MySQL release."
0 = FALSE
1 = TRUE
TINYINT: Occupies one byte; ranges from -128 to +127 signed, or 0 to 255 unsigned.
Commonly brought up in this comparison:
As of MySQL 5.0.3 -- BIT(M): stores M bits of binary data, using roughly (M+7)/8 bytes.
One thing I just noticed - with a column defined as BOOL in MySQL, Spring Roo correctly generates Java code to unmarshal the value to a Boolean, so presumably specifying BOOL can add some value, even if it's only in the nature of a hint about the intended use of the column.
Check the MySQL docs' overview of numeric types:
http://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html