I'm getting memory leaks with TJsonTextReader. I more or less implemented the example from Embarcadero: http://docwiki.embarcadero.com/CodeExamples/Tokyo/en/RTL.JSONReader
Is this a bug, or am I missing something?
Content of memoryleak_example.txt:
{
"products": [{
"id": "14469654611354654",
"name": "productname_xyz",
"height": 111.550000,
}],
"products_feedback": null
}
This is my example code:
var
  streamreader: TStreamReader;
  jsonreader: TJsonTextReader;
  arrstart, objstart: boolean;
  height: double;
  id, name: string;
begin
  // initialize to avoid compiler warnings
  arrstart := false;
  objstart := false;
  height := 0;
  streamreader := TStreamReader.Create('C:\temp\memoryleak_example.txt', TEncoding.UTF8);
  try
    jsonreader := TJsonTextReader.Create(streamreader);
    try
      while jsonreader.Read do begin
        case jsonreader.TokenType of
          // product object in products array
          TJsonToken.StartObject: if arrstart then objstart := true;
          // check for products array
          TJsonToken.StartArray: if (LowerCase(jsonreader.Path) = 'products') then arrstart := true;
          TJsonToken.Float:
            if objstart then
              if jsonreader.Path.EndsWith('height', true) then
                height := jsonreader.Value.AsExtended;
          TJsonToken.String:
            if objstart then begin
              if jsonreader.Path.EndsWith('id', true) then
                id := jsonreader.Value.AsString;
              if jsonreader.Path.EndsWith('name', true) then
                name := jsonreader.Value.AsString;
            end;
          // end product object
          TJsonToken.EndObject:
            if arrstart and objstart then begin
              objstart := false;
              if id <> '' then
                Memo1.Lines.Add(Format('id: %s - name: %s - height: %g', [id, name, height]));
              // reset values
              id := '';
              name := '';
              height := 0;
            end;
          // end of products array
          TJsonToken.EndArray: if (LowerCase(jsonreader.Path) = 'products') then arrstart := false;
        end;
      end;
    finally
      jsonreader.Free;
    end;
  finally
    streamreader.Free;
  end;
end;
This is the memory leak report on shutdown:
---------------------------
Unexpected Memory Leak
---------------------------
An unexpected memory leak has occurred. The unexpected small block leaks are:
1 - 12 bytes: Unknown x 6
13 - 20 bytes: UnicodeString x 1
21 - 28 bytes: UnicodeString x 2, Unknown x 1
29 - 36 bytes: UnicodeString x 1
45 - 52 bytes: TList<System.JSON.Types.TJsonPosition> x 6
85 - 92 bytes: Unknown x 5
---------------------------
OK
---------------------------
I use Delphi Seattle.
Embarcadero RAD Studio 10 Seattle Version 23.0.21418.4207
I found this discussion:
https://forums.embarcadero.com/thread.jspa?threadID=118014
I don't know if this is related.
Here is the FastMM full-debug report for one of the leaked blocks:
--------------------------------2018/7/13 11:48:16--------------------------------
A memory block has been leaked. The size is: 12
This block was allocated by thread 0x18B0, and the stack trace (return addresses) at the time was:
41CA186 [System][System.#GetMem]
4241E94 [System.Generics.Defaults][System.Generics.Defaults.MakeInstance]
424256A [System.Generics.Defaults][System.Generics.Defaults.Comparer_Selector_Binary]
4242E84 [System.Generics.Defaults][System.Generics.Defaults._LookupVtableInfo]
EED8629 [System.JSON.Types][System.JSON][System.JSON.Types.{System.Generics.Defaults}TComparer<System.JSON.Types.TJsonPosition>.Default]
EEE1572 [System.JSON.Readers][System.JSON][System.JSON.Readers.{System.Generics.Collections}TList<System.JSON.Types.TJsonPosition>.Create]
EEDB91E [System.JSON.Readers][System.JSON][System.JSON.Readers.TJsonReader.GetPath]
The block is currently used for an object of class: Unknown
The allocation number is: 3192748
Current memory dump of 256 bytes starting at pointer address 79DC9440:
7C EF A0 04 01 00 00 00 10 00 00 00 FE C5 93 13 00 00 00 00 A0 93 DC 79 00 00 00 00 00 00 00 00
60 74 1E 04 00 00 00 00 B9 B7 30 00 86 A1 1C 04 94 1E 24 04 6A 25 24 04 84 2E 24 04 29 86 ED 0E
72 15 EE 0E 1E B9 ED 0E AD 16 EF 0E CC 0C EF 0E 73 54 EF 0E 04 81 20 04 B0 18 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 0C 00 00 00 00 00 00 00 DE 3A 6C EC 7C EF A0 04 01 00 00 00
10 00 00 00 21 C5 93 13 00 00 00 00 A0 93 DC 79 00 00 00 00 00 00 00 00 60 74 1E 04 00 00 00 00
C6 B7 30 00 86 A1 1C 04 94 1E 24 04 6A 25 24 04 84 2E 24 04 29 86 ED 0E 72 15 EE 0E 1E B9 ED 0E
E5 16 EF 0E CC 0C EF 0E 73 54 EF 0E 04 81 20 04 B0 18 00 00 00 00 00 00 00 00 00 00 00 00 00 00
| ï . . . . . . . . . þ Å “ . . . . . “ Ü y . . . . . . . .
` t . . . . . . ¹ · 0 . † ¡ . . ” . $ . j % $ . „ . $ . ) † í .
r . î . . ¹ í . . ï . Ì . ï . s T ï . . . ° . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . Þ : l ì | ï . . . . .
. . . . ! Å “ . . . . . “ Ü y . . . . . . . . ` t . . . . . .
Æ · 0 . † ¡ . . ” . $ . j % $ . „ . $ . ) † í . r . î . . ¹ í .
å . ï . Ì . ï . s T ï . . . ° . . . . . . . . . . . . . . .
Thank you Dalija Prasnikar.
I took the System.JSON.Readers.pas source from Tokyo and the memory leak is gone.
try
  Result := TJsonPosition.BuildPath(Positions);
finally
  if Positions <> FStack then
    Positions.Free;
end;
This try/finally was missing from the TJsonReader.GetPath method in Seattle: GetPath builds a temporary TList<TJsonPosition>, and without the finally block that list is never freed, which matches the TList<System.JSON.Types.TJsonPosition> entries in the leak report.
I have a Perl CGI script that accesses Thai-language UTF-8 strings from a PostgreSQL DB and returns them to a web-based front end as JSON. The strings are fine when I get them from the DB and after I encode them as JSON (based on writing to a log file). However, when the client receives them they are corrupted, for example:
featurename "à¹\u0082รà¸\u0087à¹\u0080รียà¸\u0099วัà¸\u0094ภาษี"
Clearly some chars are being converted to Unicode escape sequences, but not all.
I could really use some suggestions as to how to solve this.
Simplified code snippet follows. I am using 'utf8' and 'utf8::all', as well as 'JSON'.
Thanks in advance for any help you can provide.
my $dataId = $cgi->param('dataid');
my $table = "uploadpoints";
my $sqlcommand = "select id,featurename from $table where dataid=$dataId;";
my $stmt = $gDbh->prepare($sqlcommand);
my $numrows = $stmt->execute;
# print JSON header
print <<EOM;
Content-type: application/json; charset="UTF-8"
EOM
my @retarray;
for (my $i = 0; ($i < $numrows); $i=$i+1)
{
    my $hashref = $stmt->fetchrow_hashref("NAME_lc");
    #my $featurename = $hashref->{'featurename'};
    #logentry("Point $i feature name is: $featurename\n");
    push @retarray, $hashref;
}
my $json = encode_json (\@retarray);
logentry("JSON\n $json");
print $json;
I have modified and simplified the example, now running locally rather than via browser invocation:
my $dataId = 5;
my $table = "uploadpoints";
my $sqlcommand = "select id,featurename from $table where dataid=$dataId and id=75;";
my $stmt = $gDbh->prepare($sqlcommand);
my $numrows = $stmt->execute;
my @retarray;
for (my $i = 0; ($i < $numrows); $i=$i+1)
{
    my $hashref = $stmt->fetchrow_hashref("NAME_lc");
    my $featurename = $hashref->{'featurename'};
    print "featurename $featurename\n";
    push @retarray, $hashref;
}
my $json = encode_json (\@retarray);
print $json;
Using hexdump as in Stefan's example, I've determined that the data as read from the database is already UTF-8 encoded. It looks as though it is being re-encoded by the JSON encode method. But why?
The JSON output uses exactly twice as many bytes as the original UTF-8.
perl testcase.pl | hexdump -C
00000000 66 65 61 74 75 72 65 6e 61 6d 65 20 e0 b9 82 e0 |featurename ....|
00000010 b8 a3 e0 b8 87 e0 b9 80 e0 b8 a3 e0 b8 b5 e0 b8 |................|
00000020 a2 e0 b8 99 e0 b9 81 e0 b8 88 e0 b9 88 e0 b8 a1 |................|
00000030 e0 b8 88 e0 b8 b1 e0 b8 99 e0 b8 97 e0 b8 a3 e0 |................|
00000040 b9 8c 0a 5b 7b 22 66 65 61 74 75 72 65 6e 61 6d |...[{"featurenam|
00000050 65 22 3a 22 c3 a0 c2 b9 c2 82 c3 a0 c2 b8 c2 a3 |e":"............|
00000060 c3 a0 c2 b8 c2 87 c3 a0 c2 b9 c2 80 c3 a0 c2 b8 |................|
00000070 c2 a3 c3 a0 c2 b8 c2 b5 c3 a0 c2 b8 c2 a2 c3 a0 |................|
00000080 c2 b8 c2 99 c3 a0 c2 b9 c2 81 c3 a0 c2 b8 c2 88 |................|
00000090 c3 a0 c2 b9 c2 88 c3 a0 c2 b8 c2 a1 c3 a0 c2 b8 |................|
000000a0 c2 88 c3 a0 c2 b8 c2 b1 c3 a0 c2 b8 c2 99 c3 a0 |................|
000000b0 c2 b8 c2 97 c3 a0 c2 b8 c2 a3 c3 a0 c2 b9 c2 8c |................|
000000c0 22 2c 22 69 64 22 3a 37 35 7d 5d |","id":75}]|
000000cb
Further suggestions? I tried using decode on the UTF-8 string but got errors related to wide characters.
I did read the recommended answer from Tom Christiansen, as well as his Unicode tutorials, but I will admit much of it went over my head. Also, it seems my problem is considerably more constrained.
I did wonder whether retrieving the hash value and assigning it to a normal variable was doing some sort of auto-decoding or encoding. I do not really understand when Perl uses its internal character format as opposed to when it retains the external encoding.
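One way to see what's happening: the doubling pattern in the hexdump (c3 a0 c2 b8 ...) is exactly what you get when UTF-8 octets are mistaken for Latin-1 characters and encoded to UTF-8 a second time. A minimal sketch of that effect (in Python, just for illustration; the Thai character is an arbitrary example):
# One Thai character, UTF-8 encoded: three octets e0 b8 a0
octets = "ภ".encode("utf-8")
print(octets.hex(" "))                              # e0 b8 a0
# Mistake those octets for Latin-1 characters and encode again:
double = octets.decode("latin-1").encode("utf-8")
print(double.hex(" "))                              # c3 a0 c2 b8 c2 a0 -- exactly twice the bytes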
UPDATE WITH SOLUTION
Turns out that since the string retrieved from the DB is already UTF-8 encoded, I need to use 'to_json' rather than 'encode_json': to_json leaves the string as-is, while encode_json assumes a Perl character string and UTF-8-encodes it a second time. This fixed the problem. Learned a lot about Perl Unicode handling in the process though...
Also recommend: http://perldoc.perl.org/perluniintro.html
Very clear exposition.
NOTE: you should probably also read this answer, which makes my answer sub-par in comparison :-)
The problem is that you have to be sure which format each string is in, otherwise you'll get incorrect conversions. When handling UTF-8, a string can be in one of two formats:
raw UTF-8 encoded octet string, i.e. \x{100} represented as two octets 0xC4 0x80
internal Perl string representation, i.e. one Unicode character \x{100} (U+0100 Ā LATIN CAPITAL LETTER A WITH MACRON)
If I/O is involved you also need to know if the I/O layer does UTF-8 de/encoding or not. For terminal I/O you also have to consider if it understands UTF-8 or not. Both taken together can make it difficult to get meaningful debug printouts from your code.
If your Perl code needs to process UTF-8 strings after reading them from the source, you must make sure that they are in the internal Perl format. Otherwise you'll get surprising results when you call code that expects Perl strings and not raw octet strings.
I try to show this in my example code:
#!/usr/bin/perl
use warnings;
use strict;
use JSON;
open(my $utf8_stdout, '>& :encoding(UTF-8)', \*STDOUT)
or die "can't reopen STDOUT as utf-8 file handle: $!\n";
my $hex = "C480";
print "${hex}\n";
my $raw = pack('H*', $hex);
print STDOUT "${raw}\n";
print $utf8_stdout "${raw}\n";
my $decoded;
utf8::decode($decoded = $raw);
print STDOUT ord($decoded), "\n";
print STDOUT "${decoded}\n"; # Wide character in print at...
print $utf8_stdout "${decoded}\n";
my $json = JSON->new->encode([$decoded]);
print STDOUT "${json}\n"; # Wide character in print at...
print $utf8_stdout "${json}\n";
$json = JSON->new->utf8->encode([$decoded]);
print STDOUT "${json}\n";
print $utf8_stdout "${json}\n";
exit 0;
Copy & paste from my terminal (which supports UTF-8). Look closely at the differences between the lines:
$ perl dummy.pl
C480
Ā
Ä
256
Wide character in print at dummy.pl line 21.
Ā
Ā
Wide character in print at dummy.pl line 25.
["Ā"]
["Ā"]
["Ā"]
["Ä"]
But compare this to the following, where STDOUT is not a terminal, but piped to another program. The hex dump always shows "c4 80", i.e. UTF-8 encoded.
$ perl dummy.pl | hexdump -C
Wide character in print at dummy.pl line 21.
Wide character in print at dummy.pl line 22.
Wide character in print at dummy.pl line 25.
Wide character in print at dummy.pl line 26.
00000000 43 34 38 30 0a c4 80 0a c4 80 0a 5b 22 c4 80 22 |C480.......[".."|
00000010 5d 0a 5b 22 c4 80 22 5d 0a 43 34 38 30 0a c4 80 |].[".."].C480...|
00000020 0a 32 35 36 0a c4 80 0a 5b 22 c4 80 22 5d 0a 5b |.256....[".."].[|
00000030 22 c4 80 22 5d 0a |".."].|
00000036
I am trying to execute basic C code that calculates a factorial in WebAssembly, and when I load the WASM file in Google Chrome (57.0.2987.98) I get:
CompileError: WebAssembly.compile():
Wasm decoding failed: expected magic word 00 61 73 6d,
found 30 30 36 31 @+0
C Code:
double fact(int i) {
long long n = 1;
for (;i > 0; i--) {
n *= i;
}
return (double)n;
}
WAST:
(module
(table 0 anyfunc)
(memory $0 1)
(export "memory" (memory $0))
(export "_Z4facti" (func $_Z4facti))
(func $_Z4facti (param $0 i32) (result f64)
(local $1 i64)
(local $2 i64)
(block $label$0
(br_if $label$0
(i32.lt_s
(get_local $0)
(i32.const 1)
)
)
(set_local $1
(i64.add
(i64.extend_s/i32
(get_local $0)
)
(i64.const 1)
)
)
(set_local $2
(i64.const 1)
)
(loop $label$1
(set_local $2
(i64.mul
(get_local $2)
(tee_local $1
(i64.add
(get_local $1)
(i64.const -1)
)
)
)
)
(br_if $label$1
(i64.gt_s
(get_local $1)
(i64.const 1)
)
)
)
(return
(f64.convert_s/i64
(get_local $2)
)
)
)
(f64.const 1)
)
)
WASM Compiled Code:
0061 736d 0100 0000 0186 8080 8000 0160
017f 017c 0382 8080 8000 0100 0484 8080
8000 0170 0000 0583 8080 8000 0100 0106
8180 8080 0000 0795 8080 8000 0206 6d65
6d6f 7279 0200 085f 5a34 6661 6374 6900
000a c380 8080 0001 bd80 8080 0001 027e
0240 2000 4101 480d 0020 00ac 4201 7c21
0142 0121 0203 4020 0220 0142 7f7c 2201
7e21 0220 0142 0155 0d00 0b20 02b9 0f0b
4400 0000 0000 00f0 3f0b
Code executed in Chrome:
async function load(){
  let binary = await fetch('https://flinkhub.com/t.wasm');
  let bytes = await binary.arrayBuffer();
  console.log(bytes);
  let module = await WebAssembly.compile(bytes);
  return new WebAssembly.Instance(module);  // Instance is a constructor, so it needs new
}
load().then(instance => console.log(instance.exports._Z4facti(3)));  // the export name is mangled
Can anyone help me out? I have been stuck on this for a whole day and am not able to understand what is going wrong.
I used WebAssembly Explorer to get the WAST and WASM code.
Using the WebAssembly Explorer's download capability you reference, I get the following file (as seen with hexdump):
0000000 00 61 73 6d 01 00 00 00 01 86 80 80 80 00 01 60
0000010 01 7f 01 7c 03 82 80 80 80 00 01 00 04 84 80 80
0000020 80 00 01 70 00 00 05 83 80 80 80 00 01 00 01 06
0000030 81 80 80 80 00 00 07 95 80 80 80 00 02 06 6d 65
0000040 6d 6f 72 79 02 00 08 5f 5a 34 66 61 63 74 69 00
0000050 00 0a c3 80 80 80 00 01 bd 80 80 80 00 01 02 7e
0000060 02 40 20 00 41 01 48 0d 00 20 00 ac 42 01 7c 21
0000070 01 42 01 21 02 03 40 20 02 20 01 42 7f 7c 22 01
0000080 7e 21 02 20 01 42 01 55 0d 00 0b 20 02 b9 0f 0b
0000090 44 00 00 00 00 00 00 f0 3f 0b
000009a
That's a valid .wasm binary which starts with the magic 00 61 73 6d a.k.a. \0asm. According to the error message you get, your file starts with 30 30 36 31 which isn't valid.
Double-check the .wasm file you have.
Decoding 30 30 36 31 as ASCII gives 0061 which seems to be your problem: you're loading the textual version of your hex file. Sure enough, the URL you provide (https://flinkhub.com/t.wasm) contains the following content as-is (I didn't hexdump it! It's ASCII):
0061 736d 0100 0000 0186 8080 8000 0160
017f 017c 0382 8080 8000 0100 0484 8080
8000 0170 0000 0583 8080 8000 0100 0106
8180 8080 0000 0795 8080 8000 0206 6d65
6d6f 7279 0200 085f 5a34 6661 6374 6900
000a c380 8080 0001 bd80 8080 0001 027e
0240 2000 4101 480d 0020 00ac 4201 7c21
0142 0121 0203 4020 0220 0142 7f7c 2201
7e21 0220 0142 0155 0d00 0b20 02b9 0f0b
4400 0000 0000 00f0 3f0b
I'm guessing you overwrote the file saved from the Explorer.
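If the file on the server really is that text dump, you can rebuild the binary from it; a minimal sketch (Python; both file names are hypothetical):
# Rebuild the real .wasm binary from the textual hex dump
hex_text = open("t.wasm.txt").read()                 # the ASCII dump served at t.wasm
data = bytes.fromhex("".join(hex_text.split()))      # strip whitespace, parse hex
assert data[:4] == b"\x00asm"                        # the wasm magic word 00 61 73 6d
open("t.wasm", "wb").write(data)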
I solved this by setting the GOARCH and GOOS environment variables correctly in the zsh shell on my Mac before generating the wasm object. It looks like the Go compiler does not recognize these variables unless you explicitly export them in the parent shell. I simply exported both variables and ran the compiler.
% export GOARCH=wasm
% export GOOS=js
% go build -o hello.wasm hello.go
I am not sure about your setup, but I am using React and esbuild-wasm, and while running the code below I was getting this error.
const service = await esbuild.startService({
  worker: true,
  wasmURL: "/esbuild.wasm/esbuild.wasm"
})
esbuild.wasm is in my public folder along with node_modules.
So I corrected the URL to wasmURL: "https://unpkg.com/esbuild-wasm@0.8.27/esbuild.wasm" and now it is working.
30 30 36 31 is the hex dump of the string "0061" which is the start of the hex dump of your wasm binary. Did you somehow fetch the textual hexdump instead of the actual binary?
In general, this error means that instead of the binary .wasm file your browser received something else. That can be an error page generated by the web server, or the same .wasm file corrupted somehow. I recommend opening your browser's Developer Tools, going to the Network tab, refreshing the page, and looking at the .wasm request and response.
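A quick way to confirm this from a script (a Python sketch; the URL is the one from the question):
import urllib.request

data = urllib.request.urlopen("https://flinkhub.com/t.wasm").read()
print(data[:4])              # a real module starts with b'\x00asm' (00 61 73 6d)
print(data[:4] == b"0061")   # True for the question's file: ASCII text, not binary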
I'm trying to connect to the Safecom TA-810 (badge/registration system) to automate the process of calculating how long employees have worked each day. Currently this is done by:
Pulling the data into the official application
Printing a list of all 'registrations'
Manually entering the values from the printed lists into our HR application
This is a job that can take multiple hours, which we'd like to see automated. So far the official tech support has been disappointing and has refused to share any details.
Using Wireshark I have been capturing the UDP transmissions and have pretty much succeeded in understanding how the protocol is built up. I'm only having issues with what I suppose is a CRC field: I don't know how it is calculated (CRC type and parameters) or over which fields ...
This is what a message header looks like (a parsing sketch follows the field list):
D0 07 71 BC BE 3B 00 00
D0 07 - Message type
71 BC - This i believe is the CRC
BE 3B - Some kind of session identifier. Stays the same for every message after the initial message (initial message has '00 00' as value)
00 00 - Message number. Increments per message: '01 00', '02 00', '03 00', ...
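Assuming each field is a little-endian 16-bit word (the message types decode to round numbers that way: D0 07 -> 2000, E8 03 -> 1000), a minimal parsing sketch in Python; note the CRC may be byte-swapped relative to this reading, as the EDIT below observes it shows up reversed in EDX:
import struct

# Parse the 8-byte header as four little-endian uint16 fields
header = bytes.fromhex("D007 71BC BE3B 0000".replace(" ", ""))
msg_type, crc, session, msg_no = struct.unpack("<4H", header)
print(hex(msg_type), hex(crc), hex(session), msg_no)   # 0x7d0 0xbc71 0x3bbe 0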
Some examples:
Header only examples
E8 03 17 FC 00 00 00 00 -> initial request (#0, no session nr)
D0 07 71 BC BE 3B 00 00 -> Initial response (#0, device sends a session nr)
4C 04 EF BF BE 3B 06 00 -> Message #6, still using the same session # as the initial response
Larger example, which has data
0B 00 07 E1 BE 3B 01 00 7E 45 78 74 65 6E 64 46 6D 74
I've also been trying to figure this out by reading the disassembled code of the original application. The code in the screenshot below runs just before the socket.sendto call and seems to be related.
Any help will be extremely appreciated.
EDIT: Made some success with debugging the application using ollydbg. The CRC appears in register (reversed) EDX at the selected line in the following screenshot.
Take a look at CRC RevEng. If you can correctly identify the data that the CRC is operating on and the location of the CRC, you should be able to determine the CRC parameters. If it is a CRC.
I've managed to create a PHP script that does the CRC calculation, worked out by debugging the application with OllyDbg.
The CRC is calculated by adding up every 2 bytes (every 16-bit word). If the sum is larger than a 16-bit value, the most significant word is added to the least significant word until the result fits in 16 bits. Finally, the CRC is inverted: a ones'-complement checksum with end-around carry, much like the one used in IP headers.
I'll add my php script for completeness:
<?php
function CompareHash($telegram)
{
    $telegram = str_replace(" ", "", $telegram);
    $telegram_crc = substr($telegram, 4, 4);
    $telegram = str_replace($telegram_crc, "0000", $telegram);
    echo "Telegram: ", $telegram, ', Crc: ', $telegram_crc, ' (', hexdec($telegram_crc), ')<br />';

    $crc = 0;
    $i = 0;
    while ($i < strlen($telegram))
    {
        $short = substr($telegram, $i, 4);
        if (strlen($short) < 4) $short = $short . '00';
        $crc += hexdec($short);
        $i += 4;
    }
    echo "Crc: ", $crc, ', inverse: ', ~$crc;

    // Region "truncate CRC to Int16"
    while ($crc > hexdec('FFFF'))
    {
        $short = $crc & hexdec('FFFF');
        // Region "unsigned shift right by 16 bits"
        $crc = $crc >> 16;
        $crc = $crc & hexdec('FFFF');
        // End region
        $crc = $short + $crc;
    }
    // End region

    // Region "invert Int16"
    $crc = ~$crc;
    $crc = $crc & hexdec('FFFF');
    // End region
    echo ', shifted ', $crc;

    if (hexdec($telegram_crc) == $crc)
    {
        echo "<br />MATCH!!! <br />";
    }
    else
    {
        echo "<br />failed .... <br />";
    }
}

$s1_full = "E8 03 17 FC 00 00 00 00";
$s2_full = "D0 07 71 BC BE 3B 00 00";
$s3_full = "D0 07 4E D4 E1 23 00 00";
$s4_full = "D0 07 35 32 BE 3B 07 00 7E 44 65 76 69 63 65 4E 61 6D 65 3D 54 41 38 31 30 00";
$s5_full = "0B 00 39 6C BE 3B 05 00 7E 52 46 43 61 72 64 4F 6E";

CompareHash($s1_full);
CompareHash($s2_full);
CompareHash($s3_full);
CompareHash($s4_full);
CompareHash($s5_full);
?>
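For comparison, a more direct port of the same algorithm that works on raw bytes instead of a hex string (a Python sketch; it zeroes the checksum field by position rather than with str_replace, which is safer if the CRC byte pattern also occurs elsewhere in the telegram):
def ta810_checksum(telegram: bytes) -> int:
    data = bytearray(telegram)
    data[2:4] = b"\x00\x00"            # zero out the CRC field itself
    if len(data) % 2:                  # pad odd-length telegrams
        data.append(0)
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total > 0xFFFF:              # fold the carries back in
        total = (total >> 16) + (total & 0xFFFF)
    return ~total & 0xFFFF             # ones' complement

msg = bytes.fromhex("E8 03 17 FC 00 00 00 00".replace(" ", ""))
print(hex(ta810_checksum(msg)))        # 0x17fc, matching bytes 2-3 of the telegram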
Thanks for the feedback!
I'm sending a COM_STMT_EXECUTE message and the server always returns:
Error 1048 - #23000 - Column 'number_tinyint' cannot be null
The query is like this:
insert into numbers (
number_tinyint,
number_smallint,
number_mediumint,
number_int,
number_bigint,
number_decimal,
number_float,
number_double
) values
(
?, 679, 778, 875468, 100007654, 198.657809, 432.8, ?)
And what I send in is:
0: 18 00 00 00 17 01 00 00    . . . . . . . .
1: 00 00 01 00 00 00 00 00    . . . . . . . .
2: 01 01 05 0a 29 5c 8f c2    . . . . ) \ . .
3: f5 b0 58 40                . . X @
And simplified for reading:
18 00 00 - size
00 - sequence
17 - type
01 00 00 00 - statement id
00 - flags
01 00 00 00 - iteration-count
00 00 - null bitmap
01 - new params bound flag
01 - byte type
05 - double type
0a - byte value - 10
29 5c 8f c2 f5 b0 58 40 - double value
The statement parameters are 10 (for the tinyint column) and 98.765 (for the double column). From what I can see, the message is encoded correctly according to the documentation, but it always fails.
Am I missing something here?
From the documentation to which you've linked:
payload:
[ deletia ]
n NULL-bitmap, length: (num-params+7)/8
Therefore, with two parameters in your case, the NULL-bitmap should have a length of (2+7)/8 = 1 byte, whereas you currently have a 2-byte bitmap.
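For illustration, a hedged sketch of how the corrected payload could be laid out (Python; note that, if I read the same documentation correctly, each parameter type is two bytes on the wire, a type byte plus a signedness-flag byte, where the dump above uses single bytes):
import struct

stmt_id = 1
payload = (
    b"\x17"                          # COM_STMT_EXECUTE
    + struct.pack("<I", stmt_id)     # statement id
    + b"\x00"                        # flags
    + struct.pack("<I", 1)           # iteration count
    + b"\x00"                        # NULL bitmap: (2 params + 7) // 8 = 1 byte
    + b"\x01"                        # new-params-bound flag
    + b"\x01\x00"                    # param 1 type: MYSQL_TYPE_TINY, signed
    + b"\x05\x00"                    # param 2 type: MYSQL_TYPE_DOUBLE, signed
    + bytes([10])                    # tinyint value 10
    + struct.pack("<d", 98.765)      # double value: 29 5c 8f c2 f5 b0 58 40
)
packet = struct.pack("<I", len(payload))[:3] + b"\x00" + payload   # 3-byte length + sequence 0
print(packet.hex(" "))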
If I have serialized data like:
> a <- 1:5
> a
[1] 1 2 3 4 5
> b <- serialize(a,NULL)
> b
[1] 58 0a 00 00 00 02 00 02 0f 02 00 02 03 00 00 00 00 0d 00 00 00 05 00 00 00 01 00 00 00 02 00 00 00 03 00 00 00 04 00 00 00 05
> b[1]
[1] 58
> b[8]
[1] 02
How can I put that serialized data into a MySQL table? I have other info there also. I read that it can be done as blob, but I don't know how it works. I am using RMySQL. I have tried:
dbGetQuery(con, "INSERT INTO table(",b," info, moreInfo, otherStuff, more, date )")
but it won't work.
If I use
query <- paste ("INSERT INTO table(",b," info, moreInfo, otherStuff, more, date )")
dbGetQuery(con,query)
it still won't work.
Try this (RODBC; con is a channel from odbcConnect, and the DSN name below is just an example):
library(RODBC)
library(data.table)

con <- odbcConnect("mydsn")                     # example DSN
dt <- data.table(a=sample(10), b=sample(10)*10)
sqlSave(con, dt, tablename='sampletablename')              # overwrites existing sampletablename table
sqlSave(con, dt, tablename='sampletablename', append=TRUE) # append instead of overwrite
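The same idea with a parameterized query, which avoids pasting binary data into SQL strings entirely (a Python sketch using mysql-connector; the table name and credentials are made up, and pickle stands in for R's serialize):
import pickle
import mysql.connector   # pip install mysql-connector-python

conn = mysql.connector.connect(host="localhost", user="me",
                               password="secret", database="mydb")
cur = conn.cursor()

blob = pickle.dumps([1, 2, 3, 4, 5])           # analogous to serialize(a, NULL)
cur.execute("INSERT INTO mytable (data, info) VALUES (%s, %s)",
            (blob, "some info"))               # the driver handles BLOB escaping
conn.commit()

cur.execute("SELECT data FROM mytable")
restored = pickle.loads(cur.fetchone()[0])     # analogous to unserialize()
print(restored)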