Reading byte array the second time causes error? - actionscript-3

I am using the following code to read an error message from a byte array. It works fine the first time, but if I try to access it a second time it throws an error:
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
StandardError is of type InboundPipe?
The error is:
Error: Error #3212: Cannot perform operation on a NativeProcess that is not running.
even though the process is running (process.running is true). The second call to readUTFBytes() seems to be the cause.
Update:
Here is the code making the same call twice in a row. The error happens on the second line, and process.running has not changed from true.
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
I also found out that standardError is an InboundPipe instance and implements IDataInput.
Update 2:
Thanks for all the help. I found this documentation when viewing the bytesAvailable property.
[Read Only] Returns the number of bytes of data available for reading
in the input buffer. User code must call bytesAvailable to ensure that
sufficient data is available before trying to read it with one of the
read methods.
When I call readUTFBytes() it resets bytesAvailable to 0, so when I read a second time and there are no bytes available, it causes the error. In my opinion, either the error message is misleading or the native process.running flag is incorrect.
I checked whether it has a position property and it does not, at least not in this instance.

Could you try setting position to zero before reading, especially before repeated access? From the documentation for position:
Moves, or returns the current position, in bytes, of the file pointer into the ByteArray object. This is the point at which the next call to a read method starts reading or a write method starts writing.
//ByteArray example
var source: String = "Some data";
var data: ByteArray = new ByteArray();
data.writeUTFBytes(source);
data.position = 0;
trace(data.readUTFBytes(data.bytesAvailable));
data.position = 0;
trace(data.readUTFBytes(data.bytesAvailable));

This was a tricky problem since the object was not a ByteArray, although it looks and acts like one (same methods and almost the same properties). It is an InboundPipe that also implements IDataInput.
I found this documentation when viewing the bytesAvailable property.
[Read Only] Returns the number of bytes of data available for reading
in the input buffer. User code must call bytesAvailable to ensure that
sufficient data is available before trying to read it with one of the
read methods.
When I call readUTFBytes() it resets bytesAvailable to 0, so when I call it a second time and there are no bytes available, it causes the error. In my opinion, either the error message is misleading or the native process.running flag is incorrect, though I have reason to believe it's the former.
The solution is to check bytesAvailable before calling read operations and store the value if it needs to be accessed later.
if (process.standardError.bytesAvailable) {
    errorData = process.standardError.readUTFBytes(process.standardError.bytesAvailable);
    errorDataArray.push(errorData);
}
I checked whether it has a position property and it does not, at least not in this instance.

Related

How to use RANSAC method to fit a line in Matlab

I am using RANSAC to fit a line to my data. The data is a 30x2 double. I used the MATLAB example to write the code given below, but I am getting an error that I don't understand and am unable to resolve.
The link to Matlab example is
https://se.mathworks.com/help/vision/ref/ransac.html
load linedata
data = [xm,ym];
N = length(xm); % number of data points
sampleSize = 2; % number of points to sample per trial
maxDistance = 2; % max allowable distance for inliers
fitLineFcn = polyfit(xm,ym,1); % fit function using polyfit
evalLineFcn = @(model) sum(ym - polyval(fitLineFcn, xm).^2,2); % distance evaluation function
[modelRANSAC, inlierIdx] = ransac(data,fitLineFcn,evalLineFcn,sampleSize,maxDistance);
The error is as follows
Error using ransac Expected fitFun to be one of these types:
function_handle
Instead its type was double.
Error in ransac>parseInputs (line 202) validateattributes(fitFun,
{'function_handle'}, {'scalar'}, mfilename, 'fitFun');
Error in ransac (line 148) [params, funcs] = parseInputs(data, fitFun,
distFun, sampleSize, ...
Let's read the error message and the documentation, as they tend to have all the information required to solve the issue!
Error using ransac Expected fitFun to be one of these types:
function_handle
Instead its type was double.
Hmm, interesting. If you read the docs (which is always the first thing you should do) you see that fitFun is the second input. The error says it's double, but it should be function_handle. This is easy to verify: indeed, fitLineFcn is a double array!
But why? Well, let's read more documentation, right? polyfit says that it returns an array of coefficient values, not a function_handle, so the documentation and the error together make it clear why you get the error.
Now, what do you want to do? It seems that you want to use polyfit as the function to fit with ransac, so we need to make it a function. According to the docs, fitFun has to be of the form fitFun(data), so we just do that and create a function_handle for it:
fitLineFcn = @(data) polyfit(data(:,1), data(:,2), 1);
And magic! It works!
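For completeness, a sketch of the full corrected call could look like the one below. Note that the distance function here is my own guess at a sensible per-point squared residual (the one in the question is not quite right either), so treat it as an assumption rather than part of the fix:
load linedata
data = [xm, ym];
sampleSize  = 2;   % number of points to sample per trial
maxDistance = 2;   % max allowable distance for inliers

% fitFun must be a function_handle taking the sampled points
fitLineFcn  = @(points) polyfit(points(:,1), points(:,2), 1);
% distFun (assumed): squared vertical distance of each point from the candidate line
evalLineFcn = @(model, points) (points(:,2) - polyval(model, points(:,1))).^2;

[modelRANSAC, inlierIdx] = ransac(data, fitLineFcn, evalLineFcn, sampleSize, maxDistance);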
Lesson to learn: read the error text and the documentation [1]; all the information is there. In fact, I have never used ransac; it's just reading the docs that led me to this answer.
[1] Programmers tend to reply with the now practically memetic RTFM, as reading the manual is always the first step in anything programming-related.

kafka-python 1.3.3: KafkaProducer.send with explicit key fails to send message to broker

(Possibly a duplicate of Can't send a keyedMessage to brokers with partitioner.class=kafka.producer.DefaultPartitioner, although the OP of that question didn't mention kafka-python. And anyway, it never got an answer.)
I have a Python program that has been successfully (for many months) sending messages to the Kafka broker, using essentially the following logic:
producer = kafka.KafkaProducer(bootstrap_servers=[some_addr],
                               retries=3)
...
msg = json.dumps(some_message)
res = producer.send(some_topic, value=msg)
Recently, I tried to upgrade it to send messages to different partitions based on a definite key value extracted from the message:
producer = kafka.KafkaProducer(bootstrap_servers=[some_addr],
                               key_serializer=str.encode,
                               retries=3)
...
try:
    key = some_message[0]
except:
    key = None
msg = json.dumps(some_message)
res = producer.send(some_topic, value=msg, key=key)
However, with this code, no messages ever make it out of the program to the broker. I've verified that the key value extracted from some_message is always a valid string. Presumably I don't need to define my own partitioner, since, according to the documentation:
The default partitioner implementation hashes each non-None key using the same murmur2 algorithm as the java client so that messages with the same key are assigned to the same partition.
Furthermore, with the new code, when I try to determine what happened to my send by calling res.get (to obtain a kafka.FutureRecordMetadata), that call throws a TypeError exception with the message descriptor 'encode' requires a 'str' object but received a 'unicode'.
(As a side question, I'm not exactly sure what I'd do with the FutureRecordMetadata if I were actually able to get it. Based on the kafka-python source code, I assume I'd want to call either its succeeded or its failed method, but the documentation is silent on the point. The documentation does say that the return value of send "resolves to" RecordMetadata, but I haven't been able to figure out, from either the documentation or the code, what "resolves to" means in this context.)
Anyway: I can't be the only person using kafka-python 1.3.3 who's ever tried to send messages with a partitioning key, and I have not seen anything on teh Intertubes describing a similar problem (except for the SO question I referenced at the top of this post).
I'm certainly willing to believe that I'm doing something wrong, but I have no idea what that might be. Is there some additional parameter I need to supply to the KafkaProducer constructor?
The fundamental problem turned out to be that my key value was a unicode, even though I was quite convinced that it was a str. Hence the selection of str.encode for my key_serializer was inappropriate, and was what led to the exception from res.get. Omitting the key_serializer and calling key.encode('utf-8') was enough to get my messages published, and partitioned as expected.
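A minimal sketch of the working version, keeping the same placeholder names (some_addr, some_topic, some_message) as above:
producer = kafka.KafkaProducer(bootstrap_servers=[some_addr],
                               retries=3)   # no key_serializer

try:
    key = some_message[0]
except:
    key = None

msg = json.dumps(some_message)
res = producer.send(some_topic,
                    value=msg,
                    key=key.encode('utf-8') if key is not None else None)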
A large contributor to the obscurity of this problem (for me) was that the kafka-python 1.3.3 documentation does not go into any detail on what a FutureRecordMetadata really is, nor what one should expect in the way of exceptions its get method can raise. The sole usage example in the documentation:
# Asynchronous by default
future = producer.send('my-topic', b'raw_bytes')
# Block for 'synchronous' sends
try:
    record_metadata = future.get(timeout=10)
except KafkaError:
    # Decide what to do if produce request failed...
    log.exception()
    pass
suggests that the only kind of exception it will raise is KafkaError, which is not true. In fact, get can and will (re-)raise any exception that the asynchronous publishing mechanism encountered in trying to get the message out the door.
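In other words, a more defensive version of that snippet would look something like this (my own variation, not from the kafka-python docs):
future = producer.send('my-topic', b'raw_bytes')
try:
    record_metadata = future.get(timeout=10)
except Exception:
    # get() re-raises whatever the asynchronous send machinery hit,
    # not just KafkaError
    log.exception('produce request failed')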
I also faced the same error. Once I added json.dumps while sending the key, it worked.
producer.send(topic="first_topic",
              key=json.dumps(key).encode('utf-8'),
              value=json.dumps(msg).encode('utf-8')
             ).add_callback(on_send_success).add_errback(on_send_error)

MmFile Empty Files throws Exception in Destructor

I'm having trouble getting MmFile to work in a directory scanning algorithm.
When I'm stress-testing it as follows
foreach (dent; dirEntries(..)) {
    const size_t K = ...;
    const ulong size = ...;
    scope auto mf = new MmFile(dent.name, MmFile.Mode.read, size, null, win);
}
I can't find a combination of size and win that works for all cases when reading data.
When I set
const size = 0;
const win = 64*1024;
the length gets calculated correctly.
But when dent.name is an existing empty file, it crashes in the destructor of the MmFile, throwing a
core.exception.FinalizeError...std.exception.ErrnoException#std.mmfile.d(490): munmap failed (Invalid argument).
And I can't recover from this error by catching core.exception.FinalizeError because it's thrown in the destructor. I haven't tried
try { delete mm; } catch (core.exception.FinalizeError) { ; /* pass */}
Maybe that works.
Is this the default behavior when calling mmap in C on existing empty files?
If so I think that MmFile should check for this error during construction.
The exception also gets thrown when I replace scope with an explicit delete.
For now I simply skip calling MmFile on empty files.
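A minimal sketch of that workaround, assuming the same loop as above (path and the window size are placeholders):
import std.file : dirEntries, SpanMode;
import std.mmfile : MmFile;

foreach (dent; dirEntries(path, SpanMode.shallow)) {
    if (dent.size == 0)   // empty file: mapping it trips the failing destructor
        continue;
    scope auto mf = new MmFile(dent.name, MmFile.Mode.read, 0, null, 64*1024);
    // ... read from mf ...
}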
It sounds like a bug to me for MmFile to barf on empty files regardless of what mmap itself does. Please report it.
On a side note, I'd advise against using either scope or delete, as they're going to be removed from the language because they're both unsafe. std.typecons.scoped replaces scope in this context if you want to do that (though it's still unsafe). As for delete, destroy will destroy the object without freeing its memory, and core.memory can be used to free the memory if you really want to. In general, though, if you want to be worrying about freeing memory, you should be manually managing your memory (with malloc and free and possibly emplace) and not using the GC at all.
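For illustration, a rough sketch of those two alternatives, reusing the variables (dent, size, win) from the question:
import std.typecons : scoped;
import std.mmfile : MmFile;

// stack-allocated, destroyed deterministically at end of scope (still unsafe)
auto mf = scoped!MmFile(dent.name, MmFile.Mode.read, size, null, win);

// or: GC-allocated, explicitly destroyed without freeing the memory
auto mf2 = new MmFile(dent.name, MmFile.Mode.read, size, null, win);
destroy(mf2);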

Correct way to use InternetReadFile() asynchronously

I've got code that's performing HTTP requests using WinInet API's asynchronously. In general, my code works, but I'm confused about the 'right' way to do things. In the documentation for InternetReadFile(), it states:
To ensure all data is retrieved, an application must continue to call
the InternetReadFile function until the function returns TRUE and the
lpdwNumberOfBytesRead parameter equals zero.
but in asynchronous mode, it may (or may not) return FALSE with an error of ERROR_IO_PENDING, indicating that it will do the work asynchronously and call my callback when finished. If I read the documentation literally, it seems that asynchronous calls could also do a partial read of the requested buffer and require the caller to keep calling InternetReadFile() until a read of 0 bytes is encountered.
A typical implementation using InternetReadFile() synchronously would look something like this:
while (InternetReadFile(Request, Buffer, BufferSize, &BytesRead) && BytesRead != 0)
{
    // do something with Buffer
}
but with the possibility that any one call to InternetReadFile() could signal that it's going to do the work asynchronously (and perhaps read part, but not all, of your request), it becomes much more complicated. If I turn to the MSDN sample code for guidance, the implementation simply calls InternetReadFile() once and expects a single return that has read the entire requested buffer, either instantly or asynchronously. Is this the correct way to use this function, or is the MSDN sample code ignoring the possibility that InternetReadFile() will only read part of the requested buffer?
After a more careful reading of the asynchronous example, I see now that it is reading repeatedly until a successful read of 0 bytes is encountered. So to answer my own question, you must call InternetReadFile() over and over again, and be prepared for either a synchronous or asynchronous response.
Calling InternetReadFile() repeatedly until it returns TRUE and BytesRead is 0 is the correct way to use it, but that alone is not enough if you work asynchronously.
As MSDN says
When running asynchronously, if a call to InternetReadFile does not result in a completed transaction, it will return FALSE and a subsequent call to GetLastError will return ERROR_IO_PENDING. When the transaction is completed the InternetStatusCallback specified in a previous call to InternetSetStatusCallback will be called with INTERNET_STATUS_REQUEST_COMPLETE.
So InternetReadFile() may return FALSE and set the last error to ERROR_IO_PENDING value if you work in asynchronous mode.
When the InternetStatusCallback is called with INTERNET_STATUS_REQUEST_COMPLETE, the lpvStatusInformation parameter will contain the address of an INTERNET_ASYNC_RESULT structure (see the InternetStatusCallback callback function). The INTERNET_ASYNC_RESULT.dwResult member will contain the result of the asynchronous operation (TRUE or FALSE, since you called InternetReadFile), and INTERNET_ASYNC_RESULT.dwError will contain an error code only if dwResult is FALSE.
If dwResult is TRUE, then your Buffer contains the data read from the Internet, and BytesRead contains the number of bytes read asynchronously.
So one of the most important things when you work asynchronously is that Buffer and BytesRead must persist between InternetStatusCallback calls, i.e. they must not be allocated on the stack. Otherwise you get undefined behaviour, memory corruption, and so on.
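To make the flow concrete, here is a rough sketch of that pattern; the Context struct and ReadLoop function are my own names, not from the WinInet API or the MSDN sample, and error handling is trimmed:
// Buffer and byte count live in a heap-allocated context so they survive
// until the INTERNET_STATUS_REQUEST_COMPLETE callback fires.
struct Context {
    HINTERNET hRequest;
    char      buffer[4096];
    DWORD     bytesRead;
};

void ReadLoop(Context* ctx)
{
    for (;;) {
        BOOL ok = InternetReadFile(ctx->hRequest, ctx->buffer,
                                   sizeof(ctx->buffer), &ctx->bytesRead);
        if (!ok) {
            if (GetLastError() == ERROR_IO_PENDING)
                return;   // the status callback will fire; consume the bytes
                          // there and call ReadLoop again
            // a real failure: handle/report it here
            return;
        }
        if (ctx->bytesRead == 0)
            return;       // TRUE with zero bytes read: the response is complete
        // consume ctx->buffer[0 .. ctx->bytesRead) here, then loop for more
    }
}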

'Invalid Handle object' when using a timer inside a function in MatLab

I am using a script in MatLab that works perfectly fine by itself, but I need to make a function out of it.
The script reads a .csv file, extracts all the values, starts a timer, and at each tick displays the corresponding coordinates extracted from the .csv, resulting in a 3D animation of my graph.
What I would like is to give it the location of the .csv so that it starts displaying the graph for that file.
Here is what I have come up with:
function handFig(fileLoc)
csv=csvread(fileLoc,1,0);
both = csv(:,2:19);
ax=axes;
set(ax,'NextPlot','replacechildren');
Dt=0.1; %sampling period in secs
k=1;
hp1=text(both(k,1),both(k,2),both(k,3),'thumb'); %get handle to dot object
hold on;
hp2=text(both(k,4),both(k,5),both(k,6),'index');
hp3=text(both(k,7),both(k,8),both(k,9),'middle');
hp4=text(both(k,10),both(k,11),both(k,12),'ring');
hp5=text(both(k,13),both(k,14),both(k,15),'pinky');
hp6=text(both(k,16),both(k,17),both(k,18),'HAND');
L1=plot3([both(k,1),both(k,16)],[both(k,2),both(k,17)],[both(k,3),both(k,18)]);
L2=plot3([both(k,4),both(k,16)],[both(k,5),both(k,17)],[both(k,6),both(k,18)]);
L3=plot3([both(k,7),both(k,16)],[both(k,8),both(k,17)],[both(k,9),both(k,18)]);
L4=plot3([both(k,10),both(k,16)],[both(k,11),both(k,17)],[both(k,12),both(k,18)]);
L5=plot3([both(k,13),both(k,16)],[both(k,14),both(k,17)],[both(k,15),both(k,18)]);
hold off;
t1=timer('TimerFcn','k=doPlot(hp1,hp2,hp3,hp4,hp5,hp6,L1,L2,L3,L4,L5,both,t1,k)','Period', Dt,'ExecutionMode','fixedRate');
start(t1);
end
And the doPlot function used:
function k=doPlot(hp1,hp2,hp3,hp4,hp5,hp6,L1,L2,L3,L4,L5,pos,t1,k)
k=k+1;
if k<5000%length(pos)
set(hp1,'pos',[pos(k,1),pos(k,2),pos(k,3)]);
axis([0 255 0 255 0 255]);
set(hp2,'pos',[pos(k,4),pos(k,5),pos(k,6)]);
set(hp3,'pos',[pos(k,7),pos(k,8),pos(k,9)]);
set(hp4,'pos',[pos(k,10),pos(k,11),pos(k,12)]);
set(hp5,'pos',[pos(k,13),pos(k,14),pos(k,15)]);
set(hp6,'pos',[pos(k,16),pos(k,17),pos(k,18)]);
set(L1,'XData',[pos(k,1),pos(k,16)],'YData',[pos(k,2),pos(k,17)],'ZData',[pos(k,3),pos(k,18)]);
set(L2,'XData',[pos(k,4),pos(k,16)],'YData',[pos(k,5),pos(k,17)],'ZData',[pos(k,6),pos(k,18)]);
set(L3,'XData',[pos(k,7),pos(k,16)],'YData',[pos(k,8),pos(k,17)],'ZData',[pos(k,9),pos(k,18)]);
set(L4,'XData',[pos(k,10),pos(k,16)],'YData',[pos(k,11),pos(k,17)],'ZData',[pos(k,12),pos(k,18)]);
set(L5,'XData',[pos(k,13),pos(k,16)],'YData',[pos(k,14),pos(k,17)],'ZData',[pos(k,15),pos(k,18)]);
else
k=1;
set(hp1,'pos',[pos(k,1),pos(k,2),pos(k,3)]);
axis([0 255 0 255 0 255]);
set(hp2,'pos',[pos(k,4),pos(k,5),pos(k,6)]);
set(hp3,'pos',[pos(k,7),pos(k,8),pos(k,9)]);
set(hp4,'pos',[pos(k,10),pos(k,11),pos(k,12)]);
set(hp5,'pos',[pos(k,13),pos(k,14),pos(k,15)]);
set(hp6,'pos',[pos(k,16),pos(k,17),pos(k,18)]);
end
However, when I run handFig('fileName.csv'), I obtain the same error every time:
??? Error while evaluating TimerFcn for timer 'timer-7'
Invalid handle object.
I figured that it might come from the function trying to create a new 'csv' and 'both' every time, so I tried removing them and feeding the function the data directly, without success.
What is exactly the problem? Is there a solution?
Thanks a lot!
I think it's because when you call doPlot in the timer for the first time, you pass in t1 as an argument, and t1 might not exist yet at that point.
Does doPlot need t1 at all? I'd suggest modifying it so that t1 isn't used; your call then becomes:
t1=timer('TimerFcn','k=doPlot(hp1,hp2,hp3,hp4,hp5,hp6,L1,L2,L3,L4,L5,both,k)','Period', Dt,'ExecutionMode','fixedRate');
Note the missing t1 in the doPlot call.
Either that, or initialise your t1 before you create the timer so it has some value to pass in.
Update (as an aside, could you use pause(Dt) in a loop instead? It seems easier.)
Actually, now I think it's a problem of scope.
It took a bit of digging to get to this, but looking at the Matlab documentation for function callbacks, it says:
When MATLAB evaluates function handles, the same variables are in scope as when the function handle was created. (In contrast, callbacks specified as strings are evaluated in the base workspace.)
You currently give your TimerFcn argument as a string, so k=doPlot(...) is evaluated in the base workspace. If you were to go to the Matlab prompt, run handFig, and then type hp1, you'd get an error because hp1 is not available in the base workspace -- it's hidden inside handFig.
That's the problem you're running into.
However, the workaround is to specify your function as a function handle rather than a string (it says function handles are evaluated in the scope in which they are created, i.e. within handFig).
Function handles passed to TimerFcn have to take two arguments, obj and event (see Creating Callback Functions). Also, that help file says you have to put doPlot in its own m-file so that it is not evaluated in the base Matlab workspace.
In addition to these two required input arguments, your callback
function can accept application-specific arguments. To receive these
input arguments, you must use a cell array when specifying the name of
the function as the value of a callback property. For more
information, see Specifying the Value of Callback Function Properties.
It goes through an example of what you have to do to get this working. Something like:
% create timer
t = timer('Period', Dt,'ExecutionMode','fixedRate');
% attach `k` to t so it can be accessed within doPlot
set(t,'UserData',k);
% specify TimerFcn and its extra arguments:
t.TimerFcn = { @doPlot, hp1, hp2, hp3, ...., both };
start(t)
Note -- the reason k is set in UserData is because it needs to be somehow saved and modified between calls to doPlot.
Then modify your doPlot to have two arguments at the beginning (which aren't used), and not accept the k argument. To extract k you do get(timer_obj,'UserData') from within doPlot:
function k=doPlot(timer_obj, event, hp1,hp2,hp3,.....)
k = get(timer_obj,'UserData');
.... % rest of code here.
% save back k so it's changed for next time!
set(timer_obj,'UserData',k);
I think that's on the right track - play around with it. I'd highly recommend the mathworks forums for this sort of thing too, those people are whizzes.
This thread from the mathworks forum was what got me started and might prove helpful to you.
Good luck!