I am adding a device to the Rocket Chip; it has its control & status registers and also an internal RAM. To be able to access it from software I have added it into regmap() in the following way:
val mem = Module(new SinglePortMemory(32, 128, 7))

// init //
mem.io.re   := false.B
mem.io.we   := false.B
mem.io.addr := 0.U
mem.io.din  := 0.S

val dout = RegNext(mem.io.dout)

regmap((
  Seq(
    0x0 -> Seq(RegField(1, enable, RegFieldDesc("enable", "This bit enables my device")))
  ) ++ Seq.tabulate(128) { j =>
    // one 32-bit register window per RAM word
    (0x50 + (j * 4)) -> Seq(RegField(32,
      RegReadFn(ready => {
        when(ready) {
          mem.io.addr := j.asUInt
          mem.io.re   := true.B
        }
        (true.B, dout.asUInt)
      }),
      RegWriteFn((valid, data) => {
        when(valid) {
          mem.io.addr := j.asUInt
          mem.io.we   := true.B
          mem.io.din  := data.asSInt
        }
        true.B
      }),
      RegFieldDesc(s"mem_${j}", "")
    ))
  }
):_*)
Now this works, but I'm not sure whether that is the best way to do it (I mean adding the memory to regmap). Can someone advise a different/better way to do it?
Rocket Chip has some built-in memories for TileLink and AXI (AHB/APB I don't recall, but you can use the others as an example and/or add converters):
https://github.com/chipsalliance/rocket-chip/blob/master/src/main/scala/tilelink/SRAM.scala
You can then connect the new TLRAM to an Xbar in addition to your normal memory-mapped registers.
Again I have some questions that are mainly due to my inexperience.
I am designing a memory-mapped accelerator; the idea is that the accelerator will have one data input, one data output, and a control input.
I want all these connections to be memory mapped and connected via FIFOs.
I have already designed a memory-mapped accelerator before, but it just had one input and one output, like the example given (GenericFIR).
If we check the GenericFIR example, we can see how to connect one input and one output:
// DOC include start: GenericFIRBlock chisel
abstract class GenericFIRBlock[D, U, EO, EI, B <: Data, T <: Data: Ring]
(
  genIn: T,
  genOut: T,
  coeffs: Seq[T]
)(implicit p: Parameters) extends DspBlock[D, U, EO, EI, B] {
  val streamNode = AXI4StreamIdentityNode()
  val mem = None

  lazy val module = new LazyModuleImp(this) {
    require(streamNode.in.length == 1)
    require(streamNode.out.length == 1)

    val in = streamNode.in.head._1
    val out = streamNode.out.head._1

    // instantiate generic fir
    val fir = Module(new GenericFIR(genIn, genOut, coeffs))

    // Attach ready and valid to outside interface
    in.ready := fir.io.in.ready
    fir.io.in.valid := in.valid

    fir.io.out.ready := out.ready
    out.valid := fir.io.out.valid

    // cast UInt to T
    fir.io.in.bits := in.bits.data.asTypeOf(GenericFIRBundle(genIn))

    // cast T to UInt
    out.bits.data := fir.io.out.bits.asUInt
  }
}
// DOC include end: GenericFIRBlock chisel
But how do we modify this for the case in which the GenericFIR has two InputBundles and two OutputBundles? Let's say in1, in2, out1, out2, all of them with their ready/valid signals (Decoupled).
Also, how do we connect the StreamNodes afterwards?
Thanks!
I wrote code that behaves weirdly and slowly, and I can't understand why.
What I'm trying to do is download data from BigQuery (using a query as input) to a CSV file, then create a URL link to this CSV so people can download it as a report.
I'm trying to optimize the process of writing the CSV, as it takes some time and shows some weird behavior.
The code iterates over the BigQuery results and passes each row to a channel for later parsing/writing using Go's encoding/csv package.
These are the relevant parts, with some debugging:
func (s *Service) generateReportWorker(ctx context.Context, query, reportName string) error {
    it, err := s.bigqueryClient.Read(ctx, query)
    if err != nil {
        return err
    }
    filename := generateReportFilename(reportName)
    gcsObj := s.gcsClient.Bucket(s.config.GcsBucket).Object(filename)
    wc := gcsObj.NewWriter(ctx)
    wc.ContentType = "text/csv"
    wc.ContentDisposition = "attachment"
    csvWriter := csv.NewWriter(wc)

    var doneCount uint64
    go backgroundTimer(ctx, it.TotalRows, &doneCount)

    rowJobs := make(chan []bigquery.Value, it.TotalRows)
    workers := 10
    wg := sync.WaitGroup{}
    wg.Add(workers)

    // start worker pool
    for i := 0; i < workers; i++ {
        go func(c context.Context, num int) {
            defer wg.Done()
            for row := range rowJobs {
                records := make([]string, len(row))
                for j, r := range row {
                    records[j] = fmt.Sprintf("%v", r)
                }
                s.mu.Lock()
                start := time.Now()
                if err := csvWriter.Write(records); err != nil {
                    log.Errorf("Error writing row: %v", err)
                }
                if time.Since(start) > time.Second {
                    fmt.Printf("worker %d took %v\n", num, time.Since(start))
                }
                s.mu.Unlock()
                atomic.AddUint64(&doneCount, 1)
            }
        }(ctx, i)
    }

    // read results from bigquery and add to the pool
    for {
        var row []bigquery.Value
        if err := it.Next(&row); err != nil {
            if err == iterator.Done || err == context.DeadlineExceeded {
                break
            }
            log.Errorf("Error loading next row from BQ: %v", err)
        }
        rowJobs <- row
    }
    fmt.Println("***done loop!***")
    close(rowJobs)
    wg.Wait()
    csvWriter.Flush()
    wc.Close()

    url := fmt.Sprintf("%s/%s/%s", s.config.BaseURL, s.config.GcsBucket, filename)
    /// ....
}
func backgroundTimer(ctx context.Context, total uint64, done *uint64) {
    ticker := time.NewTicker(10 * time.Second)
    go func() {
        for {
            select {
            case <-ctx.Done():
                ticker.Stop()
                return
            case <-ticker.C:
                fmt.Printf("progress (%d,%d)\n", atomic.LoadUint64(done), total)
            }
        }
    }()
}
The BigQuery Read func:
func (c *Client) Read(ctx context.Context, query string) (*bigquery.RowIterator, error) {
    job, err := c.bigqueryClient.Query(query).Run(ctx)
    if err != nil {
        return nil, err
    }
    it, err := job.Read(ctx)
    if err != nil {
        return nil, err
    }
    return it, nil
}
I run this code with a query that returns about 400,000 rows. The query itself takes around 10 seconds, but the whole process takes around 2 minutes.
The output:
progress (112346,392565)
progress (123631,392565)
***done loop!***
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
worker 3 took 1m16.728143875s
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
worker 3 took 1m13.525662666s
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
worker 4 took 1m17.576536375s
progress (392565,392565)
You can see that writing the first 112,346 rows was fast, then for some reason worker 3 took 1m16s (!!!) to write a single row, which caused the other workers to wait for the mutex to be released. This happened two more times, which caused the whole process to take more than 2 minutes to finish.
I'm not sure what's going on and how I can debug this further. Why do I have these stalls in the execution?
As suggested by @serge-v, you can write all the records to a local file and then transfer the file as a whole to GCS. To make the process take less time you can split the output into multiple chunks and use this command: gsutil -m cp -j, where:
gsutil is used to access cloud storage from command line
-m is used to perform a parallel multi-threaded/multi-processing copy
cp is used to copy files
-j applies gzip transport encoding to any file upload. This also saves network bandwidth while leaving the data uncompressed in Cloud Storage.
To apply this command in your Go program you can refer to this GitHub link.
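For illustration, a minimal sketch of that approach; the function name, bucket, and file layout here are hypothetical, and it assumes gsutil is installed and authenticated on the host:

package main

import (
    "encoding/csv"
    "log"
    "os"
    "os/exec"
)

// writeAndUpload writes records to a local CSV file, then copies it to GCS
// by shelling out to gsutil. Hypothetical sketch, not the poster's code.
func writeAndUpload(records [][]string, filename, bucket string) error {
    f, err := os.Create(filename)
    if err != nil {
        return err
    }
    w := csv.NewWriter(f)
    if err := w.WriteAll(records); err != nil { // WriteAll flushes internally
        f.Close()
        return err
    }
    if err := f.Close(); err != nil {
        return err
    }
    // -m: parallel copy; -j csv: gzip transport encoding for .csv files,
    // saving bandwidth while the object stays uncompressed in GCS.
    cmd := exec.Command("gsutil", "-m", "cp", "-j", "csv", filename, "gs://"+bucket+"/")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    return cmd.Run()
}

func main() {
    rows := [][]string{{"id", "name"}, {"1", "alice"}}
    if err := writeAndUpload(rows, "report.csv", "my-example-bucket"); err != nil {
        log.Fatal(err)
    }
}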
You could try implementing profiling in your Go program. Profiling will help you analyze the program's complexity and find where the time is actually spent.
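One low-effort way to get that is the standard net/http/pprof handler; a sketch, with an illustrative port and profile settings. The mutex and block profiles would be the interesting ones here, since the workers serialize on s.mu:

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof/* handlers
    "runtime"
)

func main() {
    runtime.SetBlockProfileRate(1)     // record every blocking event
    runtime.SetMutexProfileFraction(1) // record every mutex contention event
    go func() {
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()
    // ... run the report generation here, then inspect with e.g.:
    //   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
    //   go tool pprof http://localhost:6060/debug/pprof/mutex
    select {} // block forever in this sketch
}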
Since you are reading hundreds of thousands of rows from BigQuery, you can try using the BigQuery Storage API. It provides faster access to BigQuery-managed storage than the bulk data export. Using the BigQuery Storage API rather than the iterator you are currently using can make the process faster.
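A sketch of what that could look like with the cloud.google.com/go/bigquery client; EnableStorageReadClient only exists in recent client versions, so treat this as an assumption and verify it against the version you depend on:

package main

import (
    "context"
    "fmt"
    "log"

    "cloud.google.com/go/bigquery"
    "google.golang.org/api/iterator"
)

// read runs a query and returns an iterator that, where possible, streams
// results over the Storage Read API instead of paginated REST calls.
func read(ctx context.Context, projectID, query string) (*bigquery.RowIterator, error) {
    client, err := bigquery.NewClient(ctx, projectID)
    if err != nil {
        return nil, err
    }
    if err := client.EnableStorageReadClient(ctx); err != nil {
        log.Printf("storage read client unavailable, using default reads: %v", err)
    }
    job, err := client.Query(query).Run(ctx)
    if err != nil {
        return nil, err
    }
    return job.Read(ctx)
}

func main() {
    ctx := context.Background()
    it, err := read(ctx, "my-example-project", "SELECT 1 AS x")
    if err != nil {
        log.Fatal(err)
    }
    for {
        var row []bigquery.Value
        err := it.Next(&row)
        if err == iterator.Done {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(row)
    }
}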
For more reference you can also look into the Query Optimization techniques provided by BigQuery.
Notice:
The original post title
'Why multithreaded JSON parser from DWScript does not scale with number of threads?'
was changed because this problem is not related to processing JSON data with DWScript.
The problem is in the default memory manager of Delphi XE2 to XE7 (XE2 and a trial XE7 were tested), but it first appeared in this type of application.
I have a multithreaded Win32/Win64 VCL application which processes JSON data in Delphi XE2.
Each thread parses JSON data using TdwsJSONValue.ParseString(sJSON) from DWScript, reads values using DWScript methods, and stores the result as records.
For testing purposes I process the same JSON data in each thread.
A single-thread run takes N seconds within the thread to process the data. Increasing the number of threads to M linearly (approx. M * N) increases the time needed within a single thread to process the same data.
As a result there is no speed improvement. Other parts of the application (JSON data delivery, storing results in the target environment) scale as expected.
What could be the reason? Any ideas appreciated.
Supplemental information:
Tested on Win7/32 and Win7/64, Win8/64, on 2-core to 12-core (with/without HT) systems.
DWScript was chosen as the fastest available (I tested a bunch, among them SuperObject and the built-in Delphi one). SuperObject behaves similarly to the JSON unit from DWS.
Below is a complete console app illustrating the problem. To run it we need the sample JSON data available here: https://www.dropbox.com/s/4iuv87ytpcdugk6/json1.zip?dl=0 This file contains the data json1.dat for the first thread. For threads up to 16, just copy json1.dat to json2.dat...json16.dat.
Program and data should be in the same folder. To run: convert.exe N, where N is the number of threads.
The program writes execution times in msecs to stdout: time spent in the thread, time spent parsing the data, and time spent releasing (Destroy) the TdwsJSONValue object.
The statement _dwsjvData.Destroy; does not scale.
program Convert;

{$APPTYPE CONSOLE}

{$R *.res}

uses
  System.SysUtils,
  System.Diagnostics,
  System.Classes,
  dwsJSON in 'dwsJSON.pas',
  dwsStrings in 'dwsStrings.pas',
  dwsUtils in 'dwsUtils.pas',
  dwsXPlatform in 'dwsXPlatform.pas';

type
  TWorkerThread = class (TThread)
  private
    _iUid: Integer;
    _swWatch: TStopwatch;
    _lRunning: Boolean;
    _sFileJSonData: String;
    _fJsonData: TextFile;
  protected
    constructor Create (AUid: Integer);
    procedure Execute; override;
  published
    property Running: Boolean read _lRunning;
  end;

  TConverter = class (TObject)
  private
    _swWatch0, _swWatch1, _swWatch2: TStopwatch;
    _dwsjvData: TdwsJSONValue;
  protected
    constructor Create;
    destructor Destroy; override;
    function Calculate (AUid: Integer; AJSonData: String; var AParse, ADestroy: Integer): Integer;
  end;

const
  MAX_THREADS = 16;

var
  iHowMany: Integer;
  athWorker: array [1..MAX_THREADS] of Pointer;
  aiElapsed: array [1..MAX_THREADS] of Integer;
  aiElapsedParse: array [1..MAX_THREADS] of Integer;
  aiElapsedDestroy: array [1..MAX_THREADS] of Integer;
  aiFares: array [1..MAX_THREADS] of Integer;
  swWatchT, swWatchP: TStopwatch;

constructor TWorkerThread.Create (AUid: Integer);
begin
  inherited Create (True);
  _iUid := AUid;
  _swWatch := TStopwatch.Create;
  _sFileJSonData := ExtractFilePath (ParamStr (0)) + 'json' + Trim (IntToStr (_iUid)) + '.dat';
  _lRunning := False;
  Suspended := False;
end;

procedure TWorkerThread.Execute;
var
  j: Integer;
  sLine: String;
  slLines: TStringList;
  oS: TConverter;
begin
  _lRunning := True;
  oS := TConverter.Create;
  slLines := TStringList.Create;
  System.AssignFile (_fJsonData, _sFileJSonData);
  System.Reset (_fJsonData);
  j := 0;
  repeat
    System.Readln (_fJsonData, sLine);
    slLines.Add (sLine);
    Inc (j);
  until (j = 50);
  // until (System.Eof (_fJsonData));
  System.Close (_fJsonData);
  Sleep (1000);
  _swWatch.Reset;
  _swWatch.Start;
  aiFares [_iUid] := 0;
  aiElapsedParse [_iUid] := 0;
  aiElapsedDestroy [_iUid] := 0;
  for j := 1 to slLines.Count do
    aiFares [_iUid] := aiFares [_iUid] + oS.Calculate (_iUid, slLines.Strings [j - 1], aiElapsedParse [_iUid], aiElapsedDestroy [_iUid]);
  _swWatch.Stop;
  slLines.Free;
  oS.Destroy;
  aiElapsed [_iUid] := _swWatch.ElapsedMilliseconds;
  _lRunning := False;
end;

constructor TConverter.Create;
begin
  inherited Create;
  _swWatch0 := TStopwatch.Create;
  _swWatch1 := TStopwatch.Create;
  _swWatch2 := TStopwatch.Create;
end;

destructor TConverter.Destroy;
begin
  inherited;
end;

function TConverter.Calculate (AUid: Integer; AJSonData: String; var AParse, ADestroy: Integer): Integer;
var
  jFare, jTotalFares, iElapsedParse, iElapsedDestroy, iElapsedTotal: Integer;
begin
  _swWatch0.Reset;
  _swWatch0.Start;
  _swWatch1.Reset;
  _swWatch1.Start;
  _dwsjvData := TdwsJSONValue.ParseString (AJSonData);
  _swWatch1.Stop;
  iElapsedParse := _swWatch1.ElapsedMilliseconds;
  if (_dwsjvData.ValueType = jvtArray) then
  begin
    _swWatch2.Reset;
    _swWatch2.Start;
    jTotalFares := _dwsjvData.ElementCount;
    for jFare := 0 to (jTotalFares - 1) do
      if (_dwsjvData.Elements [jFare].ValueType = jvtObject) then
      begin
        _swWatch1.Reset;
        _swWatch1.Start;
        _swWatch1.Stop;
      end;
  end;
  _swWatch1.Reset;
  _swWatch1.Start;
  _dwsjvData.Destroy;
  _swWatch1.Stop;
  iElapsedDestroy := _swWatch1.ElapsedMilliseconds;
  _swWatch0.Stop;
  iElapsedTotal := _swWatch0.ElapsedMilliseconds;
  Inc (AParse, iElapsedParse);
  Inc (ADestroy, iElapsedDestroy);
  result := jTotalFares;
end;

procedure MultithreadStart;
var
  j: Integer;
begin
  for j := 1 to iHowMany do
    if (athWorker [j] = nil) then
    begin
      athWorker [j] := TWorkerThread.Create (j);
      TWorkerThread (athWorker [j]).FreeOnTerminate := False;
      TWorkerThread (athWorker [j]).Priority := tpNormal;
    end;
end;

procedure MultithreadStop;
var
  j: Integer;
begin
  for j := 1 to MAX_THREADS do
    if (athWorker [j] <> nil) then
    begin
      TWorkerThread (athWorker [j]).Terminate;
      TWorkerThread (athWorker [j]).WaitFor;
      TWorkerThread (athWorker [j]).Free;
      athWorker [j] := nil;
    end;
end;

procedure Prologue;
var
  j: Integer;
begin
  iHowMany := StrToInt (ParamStr (1));
  for j := 1 to MAX_THREADS do
    athWorker [j] := nil;
  swWatchT := TStopwatch.Create;
  swWatchT.Reset;
  swWatchP := TStopwatch.Create;
  swWatchP.Reset;
end;

procedure RunConvert;

  function __IsRunning: Boolean;
  var
    j: Integer;
  begin
    result := False;
    for j := 1 to MAX_THREADS do
      result := result or ((athWorker [j] <> nil) and TWorkerThread (athWorker [j]).Running);
  end;

begin
  swWatchT.Start;
  MultithreadStart;
  Sleep (1000);
  while (__IsRunning) do
    Sleep (500);
  MultithreadStop;
  swWatchT.Stop;
  Writeln (#13#10, 'Total time:', swWatchT.ElapsedMilliseconds);
end;

procedure Epilogue;
var
  j: Integer;
begin
  for j := 1 to iHowMany do
    Writeln (#13#10, 'Thread # ', j, ' tot.time:', aiElapsed [j], ' fares:', aiFares [j], ' tot.parse:', aiElapsedParse [j], ' tot.destroy:', aiElapsedDestroy [j]);
  Readln;
end;

begin
  try
    Prologue;
    RunConvert;
    Epilogue;
  except
    on E: Exception do
      Writeln (E.ClassName, ': ', E.Message);
  end;
end.
Have you tried my scalable memory manager? Because Delphi (with FastMM internally) does not scale well with strings and other memory-related stuff:
https://scalemm.googlecode.com/files/ScaleMM_v2_4_1.zip
And you could also try both profiler modes of my profiler to see which part is the bottleneck:
https://code.google.com/p/asmprofiler/
I did a (re)test of the FastCode MM Challenge, and the results were not that good for TBB (there was also an out-of-memory exception in the block downsize test).
In short: ScaleMM2 and Google TCMalloc are the fastest in this complex test; FastMM and ScaleMM2 use the least memory.
Average Speed Performance (scaled so that the winner = 100%):
XE6: 70.4
TCMalloc: 89.1
ScaleMem2: 100.0
TBBMem: 77.8
Average Memory Performance (scaled so that the winner = 100%):
XE6: 100.0
TCMalloc: 29.6
ScaleMem2: 75.6
TBBMem: 38.4
FastCode Challenge: https://code.google.com/p/scalemm/source/browse/#svn%2Ftrunk%2FChallenge
TBB 4.3: https://www.threadingbuildingblocks.org/download
The solution is to exchange the default Delphi XE2 or XE7 memory manager for the Intel® Threading Building Blocks memory manager. In the example application it scales roughly linearly with the number of threads, up to 16, when the app is 64-bit.
Update: this assumes the number of running threads is less than the number of cores.
This was tested on machines from 2 cores/4 HT to 12 cores/24 HT running KVM-virtualized Windows 7 with 124 GB RAM.
An interesting thing is virtualized Win 7: memory allocation and deallocation is up to 2x faster than in native Win 7.
Conclusion: if you do a lot of memory allocation/deallocation operations on 10 kB-10 MB blocks in the threads of a multithreaded (more than 4-8 threads) application, use the memory manager from Intel.
@André: thanks for the tip pointing me in the right direction!
Here is the unit with the TBB memory manager used for the tests; it has to appear first in the unit list of the main project .dpr file:
unit TBBMem;

interface

function ScalableGetMem (ASize: NativeInt): Pointer; cdecl;
  external 'tbbmalloc' name 'scalable_malloc';
procedure ScalableFreeMem (APtr: Pointer); cdecl;
  external 'tbbmalloc' name 'scalable_free';
function ScalableReAlloc (APtr: Pointer; Size: NativeInt): Pointer; cdecl;
  external 'tbbmalloc' name 'scalable_realloc';

implementation

function TBBGetMem (ASize: Integer): Pointer;
begin
  result := ScalableGetMem (ASize);
end;

function TBBFreeMem (APtr: Pointer): Integer;
begin
  ScalableFreeMem (APtr);
  result := 0;
end;

function TBBReAllocMem (APtr: Pointer; ASize: Integer): Pointer;
begin
  result := ScalableReAlloc (APtr, ASize);
end;

const
  TBBMemoryManager: TMemoryManager = (
    GetMem: TBBGetMem;
    FreeMem: TBBFreeMem;
    ReAllocMem: TBBReAllocMem);

var
  oldMemoryManager: TMemoryManager;

initialization
  GetMemoryManager (oldMemoryManager);
  SetMemoryManager (TBBMemoryManager);

finalization
  SetMemoryManager (oldMemoryManager);

end.
So I have the following test Go code, which is designed to read from a binary file through stdin and send the data read to a channel (where it would then be processed further). In the version I've given here it only reads the first two values from stdin, but that's fine as far as showing the problem is concerned.
package main

import (
    "fmt"
    "io"
    "os"
)

func input(dc chan []byte) {
    data := make([]byte, 2)
    var err error
    var n int
    for err != io.EOF {
        n, err = os.Stdin.Read(data)
        if n > 0 {
            dc <- data[0:n]
        }
    }
}

func main() {
    dc := make(chan []byte, 1)
    go input(dc)
    fmt.Println(<-dc)
}
To test it, I first build it using go build, and then send data to it using the command:
./inputtest < data.bin
The data I am using currently to test is just random binary data created using the openssl command.
The problem I am having is that it misses the first values from stdin and only gives the second and later values. I think this has to do with the channel, as the same program with the channel removed produces the correct data. Has anyone come across this before? For example, I get the following output when running this command:
./inputtest < data.bin
[36 181]
Whereas I should be getting:
./inputtest < data.bin
[72 218]
(The binary data is the same in both instances.)
You're overwriting your buffer on every read, and you've got a buffered channel, so you'll lose data every time there's space in the channel.
Try something like this (not tested, written on tablet, etc...):
import "os"
func input(dc chan []byte) error {
defer close(dc)
for {
data := make([]byte, 2)
n, err := os.Stdin.Read(data)
if n > 0 {
dc <- data[0:n]
}
if err != nil {
return err
}
}
return nil
}
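For completeness, a hypothetical main that consumes the version above; since input() closes the channel on EOF, a range loop drains every chunk exactly once:

package main

import "fmt"

func main() {
    dc := make(chan []byte) // unbuffered keeps reader and consumer in lockstep
    go input(dc)            // input's error result is discarded in this sketch
    for chunk := range dc { // the range ends when input() closes dc
        fmt.Println(chunk)
    }
}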
I am trying to make an Android/iOS app that connects to MySQL through a DataSnap server.
I want to run this as a thread. It works fine when I don't use a thread.
Some articles mention that when using COM objects in a thread it is important to use CoInitialize and CoUninitialize (but I can't get this to work).
Is this correct for a FireMonkey Android/iOS app?
My Thread code:
constructor TDMThread.Create (CreateSuspended: Boolean; ServerClassName, ProviderName: String;
  var ds: TClientDataSet; n1: String = ''; p1: String = ''; n2: String = ''; p2: String = '';
  n3: String = ''; p3: String = ''; n4: String = ''; p4: String = '');
begin
  inherited Create (CreateSuspended);
  FreeOnTerminate := False;
  iServerClassName := ServerClassName;
  iProvName := ProviderName;
  ip1 := p1;
  in1 := n1;
  ip2 := p2;
  in2 := n2;
  ip3 := p3;
  in3 := n3;
  ip4 := p4;
  in4 := n4;
  OutDS := ds;
end;

destructor TDMThread.Destroy;
begin
  inherited Destroy;
end;

procedure TDMThread.Execute;
var
  par1, par2, par3, par4: TParam;
begin
  SQLConnection1 := TSQLConnection.Create (nil);
  SQLConnection1.DriverName := 'DataSnap';
  SQLConnection1.Params.Values['HostName'] := 'localhost';
  SQLConnection1.Params.Values['Port'] := '211';
  SQLConnection1.Params.Values['DSAuthenticationPassword'] := '******';
  SQLConnection1.Params.Values['DSAuthenticationUser'] := '*******';
  SQLConnection1.Params.Values['DriverUnit'] := 'Data.DBXDataSnap';
  SQLConnection1.Params.Values['CommunicationProtocol'] := 'tcp/ip';
  SQLConnection1.Params.Values['DatasnapContext'] := 'datasnap/';
  SQLConnection1.Params.Values['DriverAssemblyLoader'] := 'Borland.Data.TDBXClientDriverLoader,Borland.Data.DbxClientDriver,Version=19.0.0.0,Culture=neutral,PublicKeyToken=91d62ebb5b0d1b1b';

  DSProviderConnection1 := TDSProviderConnection.Create (nil);
  DSProviderConnection1.SQLConnection := SQLConnection1;
  DSProviderConnection1.ServerClassName := iServerClassName;

  SQLConnection1.Connected := True;

  ClientDataSet1 := TClientDataSet.Create (nil);
  ClientDataSet1.RemoteServer := DSProviderConnection1;
  ClientDataSet1.ProviderName := iProvName;
  ClientDataSet1.Close;
  ClientDataSet1.Open;
  ClientDataSet1.FindFirst;

  OutDS.CloneCursor (ClientDataSet1, False, True);

  // Some more code ...
end;
Does somebody have any thoughts? Examples that work?
I have XE5.1 and am working on Windows 8.1.
Update:
It is running now.
I made this change at the end:
procedure TDMThread.Execute;
var
  par1, par2, par3, par4: TParam;
begin
  SQLConnection1 := TSQLConnection.Create (nil);
  SQLConnection1.DriverName := 'DataSnap';
  SQLConnection1.Params.Values['HostName'] := 'localhost';
  SQLConnection1.Params.Values['Port'] := '211';
  SQLConnection1.Params.Values['DSAuthenticationPassword'] := '******';
  SQLConnection1.Params.Values['DSAuthenticationUser'] := '*******';
  SQLConnection1.Params.Values['DriverUnit'] := 'Data.DBXDataSnap';
  SQLConnection1.Params.Values['CommunicationProtocol'] := 'tcp/ip';
  SQLConnection1.Params.Values['DatasnapContext'] := 'datasnap/';
  SQLConnection1.Params.Values['DriverAssemblyLoader'] := 'Borland.Data.TDBXClientDriverLoader,Borland.Data.DbxClientDriver,Version=19.0.0.0,Culture=neutral,PublicKeyToken=91d62ebb5b0d1b1b';

  DSProviderConnection1 := TDSProviderConnection.Create (nil);
  DSProviderConnection1.SQLConnection := SQLConnection1;
  DSProviderConnection1.ServerClassName := iServerClassName;

  SQLConnection1.Connected := True;

  ClientDataSet1 := TClientDataSet.Create (nil);
  ClientDataSet1.RemoteServer := DSProviderConnection1;
  ClientDataSet1.ProviderName := iProvName;
  ClientDataSet1.Close;
  ClientDataSet1.Open;
  ClientDataSet1.FindFirst;

  OutDS.CloneCursor (ClientDataSet1, False, True);

  // This is new
  while not Terminated do
  begin
    Sleep (100);
  end;
  //

  // Some more code ...
end;
I found the solution here: XE5 Android TBitmap.LoadFromStream fail inside a thread
As you can see in
XE5 Android TBitmap.LoadFromStream fail inside a thread
it's a bug in XE5. The author's solution (a sleep loop) is not a valid way of waiting for the thread to finish...
Don't use localhost as the hostname; on a mobile device it is wrong. You need to use the actual IP, on your local network, of the machine where the server is running.