I am very new to the microcontroller world. I am using a PIC16F877 with MPLAB X 5.50. Please help me solve the simple error described below.

I am getting an error in this code. Can anybody tell me the solution?
void interrupt ISR (void)
{
if (RCIF == 1) // error: expected ';' after top level declarator
{
UART_Buffer = RCREG; // Read The Received Data Buffer
PORTB = UART_Buffer; // Display The Received Data On LEDs
RCIF = 0; // Clear The Flag
}
}
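A likely fix, assuming the project is built with MPLAB XC8 v2.x: that compiler defaults to C99 and no longer accepts the bare interrupt qualifier, which is exactly what produces the "expected ';' after top level declarator" diagnostic. A minimal sketch (UART_Buffer is assumed to be a global, as in the snippet above; alternatively, selecting the C90 standard in the project options keeps the legacy syntax):

#include <xc.h>

volatile unsigned char UART_Buffer;   // assumed global receive buffer

void __interrupt() ISR(void)          // XC8 v2.x (C99) interrupt syntax
{
    if (RCIF == 1)                    // USART receive interrupt?
    {
        UART_Buffer = RCREG;          // reading RCREG also clears RCIF in hardware
        PORTB = UART_Buffer;          // display the received data on LEDs
    }
}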

Related

Concurrency issue in flink stream job

I have a Flink streaming job which does user fingerprinting based on click-stream event data. A code snippet is attached below.
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// use processing time as the stream time characteristic
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
DataStream<EventData> input = ConfluentKafkaSource.
createKafkaSourceFromApplicationProperties(env);
final OutputTag<EventData> emailPresentTag = new OutputTag<>("email-present") {
};
final OutputTag<EventData> dispatchIdPresentTag = new OutputTag<>("dispatch-id-present") {
};
final OutputTag<EventData> residueTag = new OutputTag<>("residue") {
};
SingleOutputStreamOperator<EventData> splitStream = input
.process(new ProcessFunction<EventData, EventData>() {
@Override
public void processElement(
EventData data,
Context ctx,
Collector<EventData> out) {
if (data.email != null && !data.email.isEmpty()) {
// emit data to side output for emailPresentTag
ctx.output(emailPresentTag, data);
} else if (data.url != null && data.url.contains("utm_source=starling")) {
// emit data to side output for dispatchIdPresentTag
ctx.output(dispatchIdPresentTag, data);
} else {
// emit data to side output for ip/campaign attributing
ctx.output(residueTag, data);
}
}
});
DataStream<EventData> emailPresentStream = splitStream.getSideOutput(emailPresentTag);
DataStream<EventData> dispatchIdPresentStream = splitStream.getSideOutput(dispatchIdPresentTag);
DataStream<EventData> residueStream = splitStream.getSideOutput(residueTag);
// process the 3 split streams separately based on their corresponding logic
DataStream<EventData> enrichedEmailPresentStream = emailPresentStream.
keyBy(e -> e.lbUserId == null ? e.eventId : e.lbUserId).
window(TumblingProcessingTimeWindows.of(Time.seconds(30))).
process(new AttributeWithEmailPresent());
DataStream<EventData> enrichedDispatchIdPresentStream = dispatchIdPresentStream.
keyBy(e -> e.lbUserId == null ? e.eventId : e.lbUserId).
window(TumblingProcessingTimeWindows.of(Time.seconds(30))).
process(new AttributeWithDispatchPresent());
DataStream<EventData> enrichedResidueStream = residueStream.
keyBy(e -> e.lbUserId == null ? e.eventId : e.lbUserId).
window(TumblingProcessingTimeWindows.of(Time.seconds(30))).
process(new AttributeWithIP());
DataStream<EventData> dataStream = enrichedEmailPresentStream.union(enrichedDispatchIdPresentStream, enrichedResidueStream);
final OutputTag<EventData> attributedTag = new OutputTag<>("attributed") {
};
final OutputTag<EventData> unattributedTag = new OutputTag<>("unattributedTag") {
};
SingleOutputStreamOperator<EventData> splitEnrichedStream = dataStream
.process(new ProcessFunction<EventData, EventData>() {
@Override
public void processElement(
EventData data,
Context ctx,
Collector<EventData> out) {
if (data.attributedEmail != null && !data.attributedEmail.isEmpty()) {
// emit data to side output for emailPresentTag
ctx.output(attributedTag, data);
} else {
// emit data to side output for ip/campaign attributing
ctx.output(unattributedTag, data);
}
}
});
//splitting attributed and unattributed stream
DataStream<EventData> attributedStream = splitEnrichedStream.getSideOutput(attributedTag);
DataStream<EventData> unattributedStream = splitEnrichedStream.getSideOutput(unattributedTag);
// attributing backlog unattributed events using attributed stream and flushing resultant attributed
// stream to kafka enriched_clickstream_event topic.
attributedStream = attributedStream.windowAll(TumblingProcessingTimeWindows.of(Time.seconds(30))).
process(new AttributeBackLogEvents()).forceNonParallel();
attributedStream.
addSink(ConfluentKafkaSink.createKafkaSinkFromApplicationProperties()).
name("Enriched Event kafka topic sink");
//handling unattributed events. Flushing them to mysql
Properties dbProperties = ConfigReader.getConfig().get(REPORTINGDB_PREFIX);
ObjectMapper objectMapper = new ObjectMapper();
unattributedStream.addSink(JdbcSink.sink(
"INSERT IGNORE INTO events_store.unattributed_event (event_id, lb_user_id, ip, event) values (?,?,?,?)",
(ps, t) -> {
ps.setString(1, t.eventId);
ps.setString(2, t.lbUserId);
ps.setString(3, t.ip);
try {
ps.setString(4, objectMapper.writeValueAsString(t));
} catch (JsonProcessingException e) {
logger.error("[UserFingerPrintJob] "+ e.getMessage());
}
},
JdbcExecutionOptions.builder()
.withBatchIntervalMs(Long.parseLong(dbProperties.getProperty(REPORTINGDB_FLUSH_INTERVAL)))
.withMaxRetries(Integer.parseInt(dbProperties.getProperty(REPORTINGDB_FLUSH_MAX_RETRIES)))
.build(),
new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
.withUrl(dbProperties.getProperty(REPORTINGDB_URL_PROPERTY_NAME))
.withDriverName(dbProperties.getProperty(REPORTINGDB_DRIVER_PROPERTY_NAME))
.withUsername(dbProperties.getProperty(REPORTINGDB_USER_PROPERTY_NAME))
.withPassword(dbProperties.getProperty(REPORTINGDB_PASSWORD_PROPERTY_NAME))
.build())).name("Unattributed event ReportingDB sink");
env.execute("UserFingerPrintJob");
Steps involved:
Splitting the stream into 3 streams based on 3 criteria, attributing each with an email, and then collecting the union of these 3 streams.
Events which are unattributed in the above step are sunk to MySQL as backlog unattributed events.
Events which are attributed are passed on to the AttributeBackLogEvents ProcessFunction. I'm assuming the issue is here.
In the AttributeBackLogEvents function, I'm fetching all events from MySQL which have a cookie-id (lb_user_id) or IP present in the input attributed events. Those events are then attributed and percolated down to the Kafka sink along with the input attributed events. For some of these unattributed events, I'm seeing duplicate attributed events with a timestamp difference of 30 seconds (which is the processing-time window). What I think is happening is that while one task of the AttributeBackLogEvents function is still processing, a separate task fetches the same events from MySQL, and both tasks process them simultaneously. Basically I want to enforce a record-level lock in MySQL/code so that the same event doesn't get picked up twice. One way may be SELECT ... FOR UPDATE (sketched below), but given the size of the data this can lead to deadlocks (or would this approach still be useful?). I tried the forceNonParallel() method too, but it isn't helpful.
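A minimal sketch of that claim-and-delete idea, using plain JDBC (java.sql.* and java.util.*) inside the window function; url, user, password, lbUserId and ip are hypothetical placeholders for whatever the function already has for the current key, and exception handling is omitted for brevity. Selecting FOR UPDATE and deleting in the same transaction means a second parallel instance cannot re-read the same backlog rows:

try (Connection conn = DriverManager.getConnection(url, user, password)) {
    conn.setAutoCommit(false);
    List<String> claimedEventIds = new ArrayList<>();
    try (PreparedStatement select = conn.prepareStatement(
            "SELECT event_id, event FROM events_store.unattributed_event "
                    + "WHERE lb_user_id = ? OR ip = ? FOR UPDATE")) {
        select.setString(1, lbUserId);
        select.setString(2, ip);
        try (ResultSet rs = select.executeQuery()) {
            while (rs.next()) {
                claimedEventIds.add(rs.getString("event_id"));
                // ... deserialize rs.getString("event") into EventData and attribute it ...
            }
        }
    }
    try (PreparedStatement delete = conn.prepareStatement(
            "DELETE FROM events_store.unattributed_event WHERE event_id = ?")) {
        for (String id : claimedEventIds) {
            delete.setString(1, id);
            delete.addBatch();
        }
        delete.executeBatch();
    }
    conn.commit(); // rows are claimed and removed before another window instance can see them
}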

esp32 arduino ide Web Socket stream json no errors but no values

Thank you for considering this problem.
I'm streaming a small JSON from a web socket and can see the stringified JSON arrive at the client because it prints to the serial monitor, but then it deserializes to a 1 or 0 instead of my key:value pairs. I just want it to parse the JSON so that the rest of my program can use the values. I get no errors. I tried both dynamic and static JSON documents, and tried triple the memory requirement.
Arduino:
#include <WiFi.h>
#define ARDUINOJSON_ENABLE_ARDUINO_STREAM 1
#include <ArduinoJson.h>
#include <StreamUtils.h>
const char* ssid = "ssid";
const char* password = "pw";
const char* host = "10.0.0.250";
void setup()
{
Serial.begin(115200);
delay(10);
// We start by connecting to a WiFi network
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
Serial.println("");
Serial.println("WiFi connected");
Serial.println("IP address: ");
Serial.println(WiFi.localIP());
}
int loopCount = 0;
StaticJsonDocument<384> doc;
DeserializationError error;
void loop()
{
//delay(5000);
++loopCount;
if (loopCount > 1) return;
Serial.print("connecting to ");
Serial.println(host);
// Use WiFiClient class to create TCP connections
WiFiClient client;
const int httpPort = 1337;
if (!client.connect(host, httpPort)) {
Serial.println("connection failed");
return;
}
// This will send the request to the server
client.print(String("GET ") + "HTTP/1.1\r\n" +
"Host: " + host + "\r\n" +
"Connection: close\r\n\r\n");
unsigned long timeout = millis();
while (client.available() == 0) {
if (millis() - timeout > 5000) {
Serial.println(">>> Client Timeout !");
client.stop();
return;
}
}
// Read all the lines of the reply from server and print them to Serial
while (client.available() > 0) {
ReadLoggingStream loggingClient(client, Serial);
error = deserializeJson(doc, loggingClient);
}
Serial.println("");
if (error) {
Serial.print(F("deserializeJson() failed: "));
Serial.println(error.f_str());
return;
}
//this doesn't work
int id = doc["id"]; // Should be 5 but I get 0 for every value
Serial.print("id: "); Serial.println(id);
}
/*Serial monitor:
14:21:25.905 ->
07:16:36.574 -> WiFi connected
07:16:36.574 -> IP address:
07:16:36.574 -> 10.0.0.113
07:16:36.574 -> connecting to 10.0.0.250
07:16:36.849 -> "{\"id\":5,\"nom\":\"whynot\",\"delayStart\":200,\"rampePWM\":11,\"pulseWelding\":200,\"speedBalayage\":0.4,\"speedWelding\":0.5,\"speedWire\":1.1,\"balayage\":0.8,\"pulseWire\":5,\"retractWire\":7}"
07:16:36.849 -> id: 0
*/
The TCP socket is in my Node/Express setup. The file projet.json is only the JSON seen above, no whitespace.
var net = require('net');
var fs = require('fs'); // needed for readFile below
var serverN = net.createServer(function(socket) {
fs.readFile("./data/projet.json", 'utf-8', (err, data) => {
if (err) {
throw err;
}
socket.write(JSON.stringify(data));
socket.pipe(socket);
});
});
serverN.listen(1337, '10.0.0.250');
I can only show you how I use it to get the right values. I use a DynamicJsonDocument in my solution:
DynamicJsonDocument root(2048);
DeserializationError err = deserializeJson(root, http.getString());
String TravelTimes = root["travelTime"];
Otherwise you can also try to read the values directly via a JsonObject:
JsonObject object = doc.to<JsonObject>();
const char* id = object["id"];
The code parses the JSON string but then calls JsonDocument::to<T>() to obtain a JsonObject. This method clears the document - from https://arduinojson.org/v6/api/jsondocument/to/
Clears the JsonDocument and converts it to the specified type.
JsonDocument::as<T>() should be used instead:
JsonObject object = doc.as<JsonObject>();
From https://arduinojson.org/v6/api/jsondocument/as/
Casts JsonDocument to the specified type.
Unlike JsonDocument::to(), this function doesn’t change the content of the JsonDocument.
You can also use serializeJsonPretty() to display the JsonObject on the serial output. Instead of:
JsonObject object = doc.to<JsonObject>();
Serial.println(object);
this can be done with:
serializeJsonPretty(doc, Serial);
Thanks to bblanchon of ArduinoJson - the Node socket was stringifying the JSON twice. I changed the socket to socket.write(data) instead of socket.write(JSON.stringify(data)) and it works.
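For reference, a sketch of the corrected server (the only change to the snippet above is writing the file contents directly; since projet.json already contains JSON text, stringifying it again just wraps it in quotes and escapes it, which is why the client parsed a string instead of an object):

var net = require('net');
var fs = require('fs');

var serverN = net.createServer(function (socket) {
    fs.readFile('./data/projet.json', 'utf-8', (err, data) => {
        if (err) {
            throw err;
        }
        socket.write(data); // already JSON text; no JSON.stringify()
        socket.pipe(socket);
    });
});
serverN.listen(1337, '10.0.0.250');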
See full explanation here
https://github.com/bblanchon/ArduinoJson/issues/1507
Thanks again!

MFT Encoder (h264) High CPU utilization

I am able to successfully encode data as H264 using a Media Foundation Transform (MFT), but unfortunately I get very high CPU usage (when I comment out the call to this function in the program, the CPU usage is low). There are only a few steps involved in the encoding, so is there anything I can do to improve it? Any idea would help.
HRESULT MFTransform::EncodeSample(IMFSample *videosample, LONGLONG llVideoTimeStamp, MFT_OUTPUT_STREAM_INFO &StreamInfo, MFT_OUTPUT_DATA_BUFFER &encDataBuffer)
{
HRESULT hr;
LONGLONG llSampleDuration;
DWORD mftEncFlags, processOutputStatus;
//used to set the output sample
IMFSample *mftEncodedSample;
//used to set the output sample
IMFMediaBuffer *mftEncodedBuffer = NULL;
memset(&encDataBuffer, 0, sizeof encDataBuffer);
if (videosample)
{
//1=set the time stamp for the sample
hr = videosample->SetSampleTime(llVideoTimeStamp);
#ifdef _DEBUG
printf("Passing sample to the H264 encoder with sample time %i.\n", llVideoTimeStamp);
#endif
if (SUCCEEDED(hr))
{
hr = MFT_encoder->ProcessInput(0, videosample, 0);
}
if (SUCCEEDED(hr))
{
MFT_encoder->GetOutputStatus(&mftEncFlags);
}
if (mftEncFlags == MFT_OUTPUT_STATUS_SAMPLE_READY)
{
hr = MFT_encoder->GetOutputStreamInfo(0, &StreamInfo);
//create empty encoded sample
if (SUCCEEDED(hr))
{
hr = MFCreateSample(&mftEncodedSample);
}
if (SUCCEEDED(hr))
{
hr = MFCreateMemoryBuffer(StreamInfo.cbSize, &mftEncodedBuffer);
}
if (SUCCEEDED(hr))
{
hr = mftEncodedSample->AddBuffer(mftEncodedBuffer);
}
if (SUCCEEDED(hr))
{
encDataBuffer.dwStatus = 0;
encDataBuffer.pEvents = 0;
encDataBuffer.dwStreamID = 0;
//After this step both pointers reference the same sample
encDataBuffer.pSample = mftEncodedSample;
hr = MFT_encoder->ProcessOutput(0, 1, &encDataBuffer, &processOutputStatus);
}
}
}
SafeRelease(&mftEncodedBuffer);
return hr;
}
The first key is to ensure you have configured the sink with MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS. I also set the MF_LOW_LATENCY attribute.
// error checking omitted for brevity
hr = attributes->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);
hr = attributes->SetUINT32(MF_SINK_WRITER_DISABLE_THROTTLING, TRUE);
hr = attributes->SetUINT32(MF_LOW_LATENCY, TRUE);
The other key is to ensure you are selecting the native format for the output of the source. Otherwise, you will remain very disappointed. I describe this in detail here.
I should also mention that you should consider creating the transform sample and memory buffer once at the beginning, instead of recreating them on each sample received.
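A small sketch of that reuse idea (the m_-prefixed members are hypothetical; the point is that the sample/buffer pair is created once, so the per-frame path only calls ProcessOutput):

// One-time setup, e.g. right after configuring the encoder:
hr = MFT_encoder->GetOutputStreamInfo(0, &m_streamInfo);
if (SUCCEEDED(hr))
    hr = MFCreateSample(&m_mftEncodedSample);
if (SUCCEEDED(hr))
    hr = MFCreateMemoryBuffer(m_streamInfo.cbSize, &m_mftEncodedBuffer);
if (SUCCEEDED(hr))
    hr = m_mftEncodedSample->AddBuffer(m_mftEncodedBuffer);

// Per-sample path: no allocations, just reuse the prepared sample.
encDataBuffer.dwStatus = 0;
encDataBuffer.pEvents = 0;
encDataBuffer.dwStreamID = 0;
encDataBuffer.pSample = m_mftEncodedSample;
hr = MFT_encoder->ProcessOutput(0, 1, &encDataBuffer, &processOutputStatus);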
Good luck. I hope this helps.

net.rim.device.api.io.file.FileIOException: File system out of resources in blackberry

The code below throws this exception: net.rim.device.api.io.file.FileIOException: File system out of resources.
Can anyone tell me why it happens?
public Bitmap loadIconFromSDcard(int index) {
FileConnection fcon = null;
Bitmap icon = null;
InputStream is=null;
try {
fcon = (FileConnection) Connector.open(Shikshapatri.filepath + "i"
+ index + ".jpg", Connector.READ);
if (fcon.exists()) {
byte[] content = new byte[(int) fcon.fileSize()];
int readOffset = 0;
int readBytes = 0;
int bytesToRead = content.length - readOffset;
is = fcon.openInputStream();
while (bytesToRead > 0) {
readBytes = is.read(content, readOffset, bytesToRead);
if (readBytes < 0) {
break;
}
readOffset += readBytes;
bytesToRead -= readBytes;
}
EncodedImage image = EncodedImage.createEncodedImage(content,
0, content.length);
image = resizeImage(image, 360, 450);
icon = image.getBitmap();
}
} catch (Exception e) {
System.out.println("Error:" + e.toString());
} finally {
// Close the connections
try {
if (fcon != null)
fcon.close();
} catch (Exception e) {
}
try {
if (is != null)
is.close();
is = null;
} catch (Exception e) {
}
}
return icon;
}
Thanks in advance...
Check this BB dev forum post - http://supportforums.blackberry.com/t5/Java-Development/File-System-Out-of-Resources/m-p/105597#M11927
Basically you should make sure to close all connections/streams as soon as you don't need them, because there is a limited number of connection handles (be it a file connection or an HTTP connection) in the OS. If you execute several loadIconFromSDcard() calls at the same time (from different threads), consider redesigning the code to call them sequentially.
UPDATE:
To avoid errors while reading the content just use the following:
byte[] content = IOUtilities.streamToBytes(is);
And since you don't need file connection and input stream any longer just close them right after reading the content (before creating EncodedImage):
is.close();
is = null; // let the finally block know there is no need to try closing it
fcon.close();
fcon = null; // let the finally block know there is no need to try closing it
Minor points:
Also, in the finally block it is worth setting fcon = null; explicitly after you close it. I believe this can help old JVMs (BlackBerry uses Java 1.3, a rather old one) decide more quickly that the object is ready to be garbage collected.
I also believe that the order in which you close streams in the finally block may be important - I'd change it to close is first and then fcon.
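Putting the update together, a sketch of the reworked read path inside the try block (same method as above, just reading with IOUtilities and closing both handles as soon as the bytes are in memory):

is = fcon.openInputStream();
byte[] content = IOUtilities.streamToBytes(is);
is.close();
is = null;      // the finally block will skip it
fcon.close();
fcon = null;    // the finally block will skip it

EncodedImage image = EncodedImage.createEncodedImage(content, 0, content.length);
image = resizeImage(image, 360, 450);
icon = image.getBitmap();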

Elmah and DbEntityValidationException

I have set up a project with both Elmah and EF 4.1 Code First.
The project is throwing a System.Data.Entity.Validation.DbEntityValidationException, but Elmah is not providing enough detail to determine what validation is failing. All that is logged is:
System.Data.Entity.Validation.DbEntityValidationException: Validation failed for one or more entities. See 'EntityValidationErrors' property for more details.
Is there a way to make Elmah expand and log the EntityValidationErrors property?
List<IUserFeedback> errors = new List<IUserFeedback>();
try
{
_dbContext.SaveChanges();
Updated(this, HasUnsavedChanges);
}
catch (DbEntityValidationException ex)
{
foreach (var x in ex.EntityValidationErrors)
{
foreach (var y in x.ValidationErrors)
{
if (!String.IsNullOrWhiteSpace(y.PropertyName))
errors.Add(new UserFeedback() {
FeedbackFlags = TypeOfUserFeedbackFlags.Error,
Message = String.Format("Unable to save {0} due to an issue with its \"{1}\" value. The error returned was \"{2}\"",x.Entry.Entity, y.PropertyName, y.ErrorMessage)
});
else
errors.Add(new UserFeedback() {
FeedbackFlags = TypeOfUserFeedbackFlags.Error,
Message = String.Format("Unable to save {0} due to the error \"{1}\"", x.Entry, y.ErrorMessage)
});
}
}
}
return errors;
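If the goal is specifically to get the detail into the Elmah log rather than only into user feedback, one option (a sketch, assuming the code runs inside an ASP.NET request so Elmah's ErrorSignal.FromCurrentContext() is available) is to flatten EntityValidationErrors into a wrapper exception and signal it:

// requires: using System; using System.Linq;
// using System.Data.Entity.Validation; using Elmah;
catch (DbEntityValidationException ex)
{
    var detail = String.Join("; ",
        ex.EntityValidationErrors
            .SelectMany(e => e.ValidationErrors)
            .Select(v => String.Format("{0}: {1}", v.PropertyName, v.ErrorMessage)));
    // Raise a wrapper exception so the validation detail ends up in the Elmah log.
    ErrorSignal.FromCurrentContext().Raise(
        new Exception("DbEntityValidationException: " + detail, ex));
    throw; // rethrow if the caller should still see the failure
}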