How to set the precision for json_real when dumping JSON with the jansson library

If I pass the double value 8.13 into json_real and dump the JSON, I see that it prints 8.1300000000000008. Is there any way to get exactly 8.13 (or 8.13000000000000000) in C?
double test = 8.13;
json_t* msgtest = json_object();
json_object_set_new(msgtest, "test", json_real(test));
char* msgStr;
msgStr = json_dumps(msgtest, 0);

You can use JSON_REAL_PRECISION, introduced in jansson 2.7.
Output all real numbers with at most n digits of precision. The valid
range for n is between 0 and 31 (inclusive); other values result
in undefined behavior.
By default, the precision is 17, to correctly and losslessly encode
all IEEE 754 double precision floating point numbers.
Consider the following program:
#include <jansson.h>
#include <stdio.h>
#include <stdlib.h>
int main() {
    double test = 8.13;
    json_t* msgtest = json_object();
    json_object_set_new(msgtest, "test", json_real(test));
    char* msgStr = json_dumps(msgtest, JSON_REAL_PRECISION(3));
    printf("%s\n", msgStr);
    free(msgStr);           /* json_dumps returns malloc'ed memory */
    json_decref(msgtest);
    return 0;
}
It prints
{"test": 8.13}

Related

How can I convert a bitstring to the binary form in Julia

I am using bitstring to perform an xor operation on the ith bit of a string:
string = bitstring(string ⊻ 1 << i)
However, the result is a string, so I cannot repeat the operation with another i.
So I want to know: how do I convert a bitstring (of the form "000000000000000000000001001") to the binary form (0b1001)?
Thanks
You can use parse to create an integer from the string, and then use string (or bitstring) to go the other way. Examples:
julia> str = "000000000000000000000001001";
julia> x = parse(UInt, str; base=2) # parse as UInt from input in base 2
0x0000000000000009
julia> x == 0b1001
true
julia> string(x; base=2) # stringify in base 2
"1001"
julia> bitstring(x) # stringify as bits (64 bits since UInt64 is 64 bits)
"0000000000000000000000000000000000000000000000000000000000001001"
Don't use bitstring. You can do the math with a BitVector or just a UInt (e.g. x = x ⊻ (UInt(1) << i)); there is no reason to bring a String into it.

How to lossless convert a double to string and back in Octave

When saving a double to a string there is some loss of precision. Even with a very large number of digits the conversion may not be reversible: if you convert a double x to a string sx and then convert back, you get a number x' which may not be bitwise equal to x. This can cause problems, for instance when checking for differences in a battery of tests. One possibility is to use a binary format (for instance Octave's native binary format, or HDF5), but I want to store the number in a text file, so I need a conversion to a string. I have a working solution, but I ask if there is some standard for this, or a better solution.
In C/C++ you could reinterpret the bytes of the double as unsigned chars and convert each byte to two hex digits with printf("%02x", c[j]). For instance, pi would then be converted to a string of length 16: 54442d18400921fb. The problem with this is that when you read the hex string you don't get any idea of which number it is. So I would be interested in some mixed format, for instance pi -> 3.14{54442d18400921fb}. The first part is a (probably low precision) decimal representation of the number (typically I would use a "%g" output conversion), and the string in braces is the lossless hexadecimal representation.
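For reference, here is a minimal C sketch of the %02x round trip just described. It uses memcpy rather than a pointer cast, and the digit order depends on the machine's endianness; the output shown in the comment assumes little-endian x86, and M_PI is assumed available from <math.h> (it is on POSIX systems):
#include <stdio.h>
#include <string.h>
#include <math.h>

int main(void) {
    double x = M_PI;
    unsigned char bytes[sizeof x];
    char hex[2 * sizeof x + 1];

    memcpy(bytes, &x, sizeof x);               /* copy the raw bits */
    for (size_t j = 0; j < sizeof x; j++)
        sprintf(hex + 2 * j, "%02x", bytes[j]);
    printf("%.3g{%s}\n", x, hex);              /* 3.14{182d4454fb210940} on little-endian x86 */

    /* parse it back, two hex digits per byte */
    double y;
    for (size_t j = 0; j < sizeof x; j++) {
        unsigned int b;
        sscanf(hex + 2 * j, "%2x", &b);
        bytes[j] = (unsigned char)b;
    }
    memcpy(&y, bytes, sizeof y);
    printf("bitwise equal: %d\n", memcmp(&x, &y, sizeof x) == 0);
    return 0;
}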
EDIT: I posted the code as an answer.
Following the ideas already suggested in the post, I wrote the following functions, which seem to work.
function s = dbl2str(d);
  z = typecast(d, "uint32");
  s = sprintf("%.3g{%08x%08x}\n", d, z);
endfunction

function d = str2dbl(s);
  k1 = index(s, "{");
  k2 = index(s, "}");
  ## Check that the braces are balanced, or absent altogether
  assert((k1 == 0) == (k2 == 0));
  if k1 > 0; assert(k2 > k1); endif
  if (k1 == 0);
    ## If there is no {hexa} part, convert with loss
    d = str2double(s);
  else
    ## Lossless conversion
    ss = substr(s, k1 + 1, k2 - k1 - 1);
    z = uint32(sscanf(ss, "%8x", 2));
    d = typecast(z, "double");
  endif
endfunction
Then I have
>> spi=dbl2str(pi)
spi = 3.14{54442d18400921fb}
>> pi2 = str2dbl(spi)
pi2 = 3.1416
>> pi2-pi
ans = 0
>> snan = dbl2str(NaN)
snan = NaN{000000007ff80000}
>> nan1 = str2dbl(snan)
nan1 = NaN
A further improvement would be to use another type of encoding, for
instance Base64 (as suggested by @CrisLuengo in a comment), which would
reduce the length of the binary part from 16 to 11 characters.
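To illustrate the size argument, here is a small C sketch that encodes the 8 bytes of a double as unpadded Base64, which indeed yields 11 characters. The b64 helper is purely illustrative, not a library function:
#include <stdio.h>
#include <string.h>

/* Encode n bytes as Base64 without padding. */
static void b64(const unsigned char *in, size_t n, char *out) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    size_t i, o = 0;
    for (i = 0; i < n; i += 3) {
        unsigned v = in[i] << 16;               /* pack up to 3 bytes */
        if (i + 1 < n) v |= in[i + 1] << 8;
        if (i + 2 < n) v |= in[i + 2];
        out[o++] = tbl[(v >> 18) & 63];         /* emit 6 bits at a time */
        out[o++] = tbl[(v >> 12) & 63];
        if (i + 1 < n) out[o++] = tbl[(v >> 6) & 63];
        if (i + 2 < n) out[o++] = tbl[v & 63];
    }
    out[o] = '\0';
}

int main(void) {
    double x = 3.141592653589793;
    unsigned char bytes[sizeof x];
    char enc[16];
    memcpy(bytes, &x, sizeof x);
    b64(bytes, sizeof x, enc);
    printf("%.3g{%s}\n", x, enc);   /* 8 bytes -> 11 Base64 characters */
    return 0;
}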

Decimal to Binary Conversion Error

How do you fix the following problem converting from decimal to binary?
void tobinary(int bin) {
    string binary = Convert.ToInt32(bin, 2);
}
These are the errors:
Error 2: Argument 2: cannot convert from 'int' to 'System.IFormatProvider' 42
Error 1: The best overloaded method match for 'System.Convert.ToInt32(object, System.IFormatProvider)' has some invalid arguments 42
see:
Decimal to binary conversion in C#
It should be:
void tobinary(int bin) {
    string binary = Convert.ToString(bin, 2);
}
Convert.ToString(value, 2) formats the integer in base 2; for example, Convert.ToString(10, 2) returns "1010".

Double-precision error using Dislin

I get the following error when trying to compile:
call qplot (Z, B, m + 1)
1
Error: Type mismatch in argument 'x' at (1); passed REAL(8) to REAL(4)
Everything seems to be in double precision, so I can't help but think it is a Dislin error, especially considering that it appears with reference to a Dislin statement. What am I doing wrong? My code is the following:
program test
   use dislin
   integer :: i
   integer, parameter :: n = 2
   integer, parameter :: m = 5000
   real (kind = 8) :: X(n + 1), Z(0:m), B(0:m)

   X(1) = 1.D0
   X(2) = 0.D0
   X(3) = 2.D0

   do i = 0, m
      Z(i) = -1.D0 + (2.D0*i) / m
      B(i) = f(Z(i))
   end do

   call qplot (Z, B, m + 1)
   read(*,*)

contains

   real (kind = 8) function f(t)
      implicit none
      real (kind = 8), intent(in) :: t
      real (kind = 8), parameter :: pi = Atan(1.D0)*4.D0
      f = cos(pi*t)
   end function f

end program
From the DISLIN manual I read that qplot requires (single precision) floats:
QPLOT connects data points with lines.
The call is: CALL QPLOT (XRAY, YRAY, N) level 0, 1
or: void qplot (const float *xray, const float *yray, int n);
XRAY, YRAY are arrays that contain X- and Y-coordinates.
N is the number of data points.
So you need to convert Z and B to real:
call qplot (real(Z), real(B), m + 1)
Instead of using fixed numbers for the kind of real variables (which vary between compilers), please consider using the ISO_Fortran_env module and its pre-defined constants REAL32 and REAL64, e.g. use, intrinsic :: iso_fortran_env, only: real64 and then real(real64) :: X(n + 1).
The qplot routine requires a default real. You can convert your data
call qplot(real(Z), real(B), m + 1)
I second the remark about kind = 8: it is very ugly. If you insist on 8, at least declare a named constant
integer, parameter :: rp = 8
and use
real(rp) ::
As the first two answers explain, the standard versions of the Dislin routines require single precision arguments. I find it most convenient to use these, since I may have either single or double precision arguments, and the real(...) technique converts double variables where needed. It seems unlikely that the lost precision will be perceptible on a graph. However, if you wish to work exclusively in double precision, there is an alternative set of routines. They have the same names, but take double precision arguments. To obtain them, link in the library "dislin_d".

How do you split a hex string into bytes?

Because hex is often used to represent things like RGBA color data, I'm trying to find out how to take a large hex value like 0x11AA22BB and split it into separate bytes (so 0x11, 0xAA, 0x22, and 0xBB, essentially). I know that each hex digit can be represented directly by four bits, but I don't know how to break a chain of bits into smaller groups either.
I'm sure there is a simple answer to this. It probably has something to do with casting to an array of single bytes, or with bitwise operators, but I can't figure it out. I know there is also the issue of endianness and how the bytes are organized (RGBA, ARGB, ABGR, etc.), but right now I just want to understand how the splitting works. I'm using C++, but I think this is not necessarily specific to that language.
So, to reiterate: how does one take a large hex value like 0x11AA22BB and split it into 0x11, 0xAA, 0x22, and 0xBB?
The two ways are mod/div and shift/mask, but both are actually the same way.
mod/div:
num = 0x11aa22bb
while num > 0:
    byte = num % 0x100
    print(hex(byte))
    num //= 0x100
shift/mask:
num = 0x11aa22bb
while num > 0:
    byte = num & 0xff
    print(hex(byte))
    num >>= 8
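For the asker's C/C++ setting, the same shift/mask loop could look like this; a minimal C sketch, assuming a 32-bit value. Like the Python version, it emits the bytes least significant first (0xBB, 0x22, 0xAA, 0x11):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t num = 0x11AA22BB;
    while (num > 0) {
        unsigned byte = num & 0xFF;   /* mask off the low 8 bits */
        printf("0x%02X\n", byte);
        num >>= 8;                    /* move the next byte down */
    }
    return 0;
}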
If you don't mind stepping away from C++ and you own a Linux machine, you could sed it using:
sed 's/0x/ /g' file.hex | sed -E 's/[a-fA-F0-9]{2}/0x& /g' | tr -s ' '
The first command removes the 0x prefixes and replaces them with spaces, the second splits the sequences into bytes and adds the prefix to each part, and the last squeezes whitespace.
Note that you can use this on stdin by removing the filename (file.hex).
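If the value starts out as a string rather than an integer, you can also parse it two hex digits at a time; a minimal C sketch, assuming a "0x" prefix on the input:
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *hex = "0x11AA22BB";
    const char *p = hex + 2;              /* skip the "0x" prefix */
    size_t n = strlen(p);
    for (size_t i = 0; i + 1 < n; i += 2) {
        unsigned int byte;
        sscanf(p + i, "%2x", &byte);      /* consume two hex digits */
        printf("0x%02X\n", byte);
    }
    return 0;
}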
void splitByte(unsigned char *split, unsigned int a, int quantBytes)
{
    unsigned char aux;
    int i;
    /* Extract the bytes, least significant first. */
    for (i = 0; i < quantBytes; i++)
    {
        split[i] = a & 0xFF;
        a = a >> 8;
    }
    /* Reverse the array so the most significant byte comes first. */
    for (i = 0; i < quantBytes / 2; i++)
    {
        aux = split[i];
        split[i] = split[quantBytes - i - 1];
        split[quantBytes - i - 1] = aux;
    }
}
In main:
unsigned char split[4];
splitByte(split, 0xffffffff, 4);