I am using PostThreadMessage() and GetMessage() to send data to a thread's message queue and to read it back. But I want to check whether the data is in the queue or not. Can you tell me how to check for the data that I sent with PostThreadMessage()?
Look at PeekMessage with the PM_NOREMOVE flag: it lets you inspect the queue without removing the message from it.
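For illustration, here is a rough sketch of that check via JNA from Java (the JNA binding of PeekMessage is an assumption here; in native Win32 code the call is the same, made directly):

import com.sun.jna.platform.win32.User32;
import com.sun.jna.platform.win32.WinUser;

// PM_NOREMOVE: inspect the head of the queue without removing the message.
final int PM_NOREMOVE = 0x0000;

WinUser.MSG msg = new WinUser.MSG();
// Assumed JNA mapping of Win32 PeekMessage; hWnd null and filter range 0,0
// mean "any message for the current thread", including thread messages.
boolean pending = User32.INSTANCE.PeekMessage(msg, null, 0, 0, PM_NOREMOVE);
if (pending) {
    System.out.println("A message is waiting, id = " + msg.message);
}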
I'm trying to use the deduplication feature of Apache Pulsar. brokerDeduplicationEnabled=true is set in the standalone.conf file, but when I send the same message from the producer multiple times, I get all the messages at the consumer end. Is this expected behaviour?
Doesn't deduplication mean content-based deduplication, as in AWS SQS?
Here is my producer code for reference.
import pulsar
import json

client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer(
    'persistent://public/default/my-topic',
    send_timeout_millis=0,
    producer_name="producer-1")

data = {'key1': 0, 'key2': 1}
for i in range(10):
    encoded_data = json.dumps(data).encode('utf-8')
    producer.send(encoded_data)

client.close()
In Pulsar, deduplication doesn't work on the content of the message; it works on the individual message. The intention isn't to deduplicate content but to ensure that an individual message cannot be published more than once.
When you send a message, Pulsar assigns it a unique identifier. Deduplication ensures that, in failure scenarios, the same message doesn't get stored in (or written to) Pulsar more than once. It does this by comparing the identifier to a list of already-stored identifiers. If the identifier of the message has already been stored, Pulsar ignores it. This way, Pulsar stores the message only once. This is part of Pulsar's mechanism to guarantee a message will be sent exactly once.
For more details, see PIP 6: Guaranteed Message Deduplication.
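To make the mechanism concrete, here is a minimal sketch using the Java client (topic and producer name taken from the question; the duplicate send is contrived for illustration). Deduplication keys on the producer name plus the message's sequence ID, not on the payload:

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// A stable producer name matters: the broker tracks dedup state per producer.
Producer<byte[]> producer = client.newProducer()
        .topic("persistent://public/default/my-topic")
        .producerName("producer-1")
        .create();

byte[] payload = "{\"key1\": 0, \"key2\": 1}".getBytes();

// Stored: sequence ID 42 is new for producer-1.
producer.newMessage().sequenceId(42L).value(payload).send();

// Treated as a duplicate (same producer name, same sequence ID): the broker
// acknowledges it but does not store it again. The same payload sent under a
// fresh sequence ID would be stored, which is why the question's loop
// delivers all ten messages.
producer.newMessage().sequenceId(42L).value(payload).send();

client.close();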
AWS Step Function
My problem is how to send sendTaskSuccess or sendTaskFailure to an activity that is running under a state machine in AWS.
My actual intent is to notify the specific activities that belong to a particular state machine execution.
I can successfully send a notification to all waiting activities by activity ARN, but my actual need is to send a notification to the specific activity that belongs to a particular state machine execution.
Example: for state machine SM1 there are two executions ongoing, SM1E1 and SM1E2. In that case I want to sendTaskSuccess for the activity that belongs to SM1E1.
I used the following code, but it sends the notification to all activities:
GetActivityTaskResult getActivityTaskResult = client.getActivityTask(
        new GetActivityTaskRequest().withActivityArn("arn detail"));
if (getActivityTaskResult.getTaskToken() != null) {
    try {
        JsonNode json = Jackson.jsonNodeOf(getActivityTaskResult.getInput());
        String outputResult = patientRegistrationActivity.setStatus(json.get("patientId").textValue());
        System.out.println("outputResult " + outputResult);
        SendTaskSuccessRequest sendTaskRequest = new SendTaskSuccessRequest()
                .withOutput(outputResult)
                .withTaskToken(getActivityTaskResult.getTaskToken());
        client.sendTaskSuccess(sendTaskRequest);
    } catch (Exception e) {
        client.sendTaskFailure(
                new SendTaskFailureRequest().withTaskToken(getActivityTaskResult.getTaskToken()));
    }
}
As far as I know, you have no control over which task token is returned; you may get one for SM1E1 or SM1E2, and you cannot tell by looking at the task token itself. GetActivityTask also returns the task's input, so based on that you may be able to tell which execution you are dealing with. But if you get a token you are not interested in, I don't think there's a way to put it back, so you won't be able to fetch it again with GetActivityTask later. I guess you could store it in a database somewhere for later use.
One idea you can try is to use the new callback integration pattern. You can specify the Payload parameter in the state definition to include the task token, like this: token.$: "$$.Task.Token". Then use GetExecutionHistory to find the TaskScheduled event of the execution you are interested in, retrieve the parameters.Payload.token value, and use that with sendTaskSuccess (a sketch of this follows the snippet below).
Here's a snippet of my serverless.yml file that describes the state
WaitForUserInput: #Wait for the user to do something
  Type: Task
  Resource: arn:aws:states:::lambda:invoke.waitForTaskToken
  Parameters:
    FunctionName:
      Fn::GetAtt: [WaitForUserInputLambdaFunction, Arn]
    Payload:
      token.$: "$$.Task.Token"
      executionArn.$: "$$.Execution.Id"
  Next: DoSomethingElse
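And here is a hedged sketch of the retrieval side, in the same AWS SDK for Java style as the code above (the execution ARN and the output JSON are placeholders):

GetExecutionHistoryResult history = client.getExecutionHistory(
        new GetExecutionHistoryRequest()
                // Placeholder ARN: the execution you are interested in, e.g. SM1E1.
                .withExecutionArn("arn:aws:states:us-east-1:123456789012:execution:SM1:SM1E1"));

for (HistoryEvent event : history.getEvents()) {
    if ("TaskScheduled".equals(event.getType())) {
        // "parameters" holds the JSON passed to the task, including the
        // Payload.token injected by the state definition above.
        JsonNode params = Jackson.jsonNodeOf(event.getTaskScheduledEventDetails().getParameters());
        String taskToken = params.get("Payload").get("token").textValue();
        client.sendTaskSuccess(new SendTaskSuccessRequest()
                .withTaskToken(taskToken)
                .withOutput("{\"status\": \"approved\"}"));
    }
}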
I did a POC to check, and below is the solution.
If the token is consumed by getActivityTaskResult.getTaskToken() and your conditions are not satisfied by the request input, you can use the line below to avoid consuming the token:
awsStepFunctionClient.sendTaskHeartbeat(new SendTaskHeartbeatRequest().withTaskToken(taskToken))
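In context, that idea looks roughly like the sketch below (the executionArn input field is hypothetical; it would have to be injected into the activity's input by the state machine, e.g. via "$$.Execution.Id" as in the snippet above):

GetActivityTaskResult task = client.getActivityTask(
        new GetActivityTaskRequest().withActivityArn("arn detail"));
if (task.getTaskToken() != null) {
    JsonNode input = Jackson.jsonNodeOf(task.getInput());
    // Hypothetical field: only complete tokens that belong to execution SM1E1.
    if (input.get("executionArn").textValue().endsWith(":SM1E1")) {
        client.sendTaskSuccess(new SendTaskSuccessRequest()
                .withTaskToken(task.getTaskToken())
                .withOutput("{}"));
    } else {
        // Wrong execution: send a heartbeat instead of completing the task,
        // per the POC above, so the token is not consumed.
        client.sendTaskHeartbeat(new SendTaskHeartbeatRequest().withTaskToken(task.getTaskToken()));
    }
}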
I want to send a message in a transaction. Here is my code:
_data = web3.toHex('xxxx');
instance.function_name(param1, param2, param3, param4, {
    value: web3.toWei(_price, 'ether'),
    from: web3.eth.accounts[0],
    data: _data
}).then(...);
The transaction is processed successfully, but the input data shown on etherscan.io is not my _data value.
Can anybody help me? Thank you.
The data field in the transaction object is used when deploying a contract or when using the general sendTransaction or sendRawTransaction methods. If you are using a contract instance, the data field is ignored.
From the web3.js docs:
Object - (optional) The (previous) last parameter can be a transaction object, see web3.eth.sendTransaction parameter 1 for more. Note: data and to properties will not be taken into account.
If you want to send the data manually, use sendTransaction.
The information shown in Etherscan is the decoded data from the signed transaction, describing the function call that was made; it is not free-form user data (if that's what you're trying to insert). The first 4 bytes (32 bits) of the data are the function selector, and each 256-bit block afterwards is a parameter.
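As a quick illustration of that layout, the selector can be derived like this (sketched with web3j's Hash utility; the function signature is only an example):

import org.web3j.crypto.Hash;

// keccak256 of the canonical signature; the first 4 bytes ("0x" plus
// 8 hex chars) are the selector that starts the calldata.
String selector = Hash.sha3String("transfer(address,uint256)").substring(0, 10);
System.out.println(selector); // 0xa9059cbb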
See this source for more in-depth information.
I am trying to run a Flink streaming job, and I want to determine the throughput and latency of the streaming process. I have started the Kafka broker server and have incoming messages from Kafka. How do I count messages per second (throughput)?
(Like rdd.count — is there any similar method to get the count of incoming messages?)
(Complete scenario: I send each message through the producer as a JSON object, adding some information such as a name string and System.currentTimeMillis(). During streaming, how do I obtain the sent JSON object from messageStream (a DataStream)?)
Thanks in advance.
CODE :
/**
 * Read Strings from Kafka and print them to standard out.
 */
public static void main(String[] args) throws Exception {
    System.setProperty("hadoop.home.dir", "c:/winutils/");

    // parse input arguments
    final ParameterTool parameterTool = ParameterTool.fromArgs(args);

    if (parameterTool.getNumberOfParameters() < 4) {
        System.out.println("Missing parameters!\nUsage: Kafka --topic <topic> " +
                "--bootstrap.servers <kafka brokers> --zookeeper.connect <zk quorum> --group.id <some id>");
        return;
    }

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.getConfig().disableSysoutLogging();
    env.getConfig().setRestartStrategy(RestartStrategies.fixedDelayRestart(4, 10000));
    env.enableCheckpointing(5000); // create a checkpoint every 5 seconds
    env.getConfig().setGlobalJobParameters(parameterTool); // make parameters available in the web interface

    DataStream<String> messageStream = env
            .addSource(new FlinkKafkaConsumer010<>(
                    parameterTool.getRequired("topic"),
                    new SimpleStringSchema(),
                    parameterTool.getProperties()));

    messageStream.print();

    env.execute();
}
There are a few metrics available in the Flink UI from which you can calculate the number of events per second and the like.
You can also add your own metrics, computing whatever numbers your requirements call for, and have them displayed in the Flink UI.
And lastly, specifically for latency tracking, you can try what's explained here: latency-tracking. Similarly, you can get throughput using meters.
This benchmarking application might be a good place to start. The documentation on latency tracking and the metrics available from Flink's Kafka connector should also be interesting reading.
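Putting those pieces together, here is a minimal sketch, assuming the producer embeds its send timestamp in a JSON field named sentTime (your field names may differ): it registers a Meter for throughput and derives a rough per-record latency from the embedded timestamp.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Meter;
import org.apache.flink.metrics.MeterView;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

DataStream<JsonNode> parsed = messageStream.map(new RichMapFunction<String, JsonNode>() {
    private transient Meter throughput;
    private transient ObjectMapper mapper;

    @Override
    public void open(Configuration parameters) {
        mapper = new ObjectMapper();
        // Events per second, averaged over the last 60 seconds;
        // shows up in the Flink UI under the task's metrics.
        throughput = getRuntimeContext().getMetricGroup()
                .meter("eventsPerSecond", new MeterView(60));
    }

    @Override
    public JsonNode map(String value) throws Exception {
        throughput.markEvent();
        JsonNode json = mapper.readTree(value);
        // Rough end-to-end latency; assumes producer and consumer clocks agree.
        long latencyMs = System.currentTimeMillis() - json.get("sentTime").asLong();
        System.out.println("latency (ms): " + latencyMs);
        return json;
    }
});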
I was able to run SGX in hardware mode and successfully retrieve the SigRL from IAS, but I'm struggling to perform the quote attestation using their REST API. I used the REST API interface description here.
I connected successfully to the server with the HTTP POST request
https://test-as.sgx.trustedservices.intel.com:443/attestation//sgx/v1/report
But I always receive an error: 400 Bad request!?
On the client side, I get msg3 as follows:
ret = sgx_ra_proc_msg2(this->enclave->getContext(),
                       this->enclave->getID(),
                       sgx_ra_proc_msg2_trusted,
                       sgx_ra_get_msg3_trusted,
                       p_msg2,
                       size,
                       &p_msg3,
                       &msg3_size);
which returns SGX_SUCCESS.
Then I prepare the quote in the p_msg3 structure
std::string quoteStr = ConvertToString(p_msg3->quote);
quoteStr = EncodeToBase64(quoteStr);
and finally I put the quote into the JSON string, which results in
{"isvEnclaveQuote": "MDIwMDAxMDBlMzBhMDAwMDA0MDA...RiMjUyYTgxOGE4NTIzMzQxZDY3"}
which is then sent as the payload to the IAS.
400 Bad Request is generally returned if there is something wrong with your quote.
Please double-check that the SPID and the linkability option you used to create the quote match the ones you used to register with IAS.