Dashing job get_event?

In my dashboard I have a job where I would like to get a value from my widget.
# :first_in sets how long it takes before the job is first run. In this case, it is run immediately.
SCHEDULER.every '1s', :first_in => 0 do |job|
  send_event('my_widget', { value: rand(400) })
end
This is the code that sends data to my widget, but how can I get the data back? What is the "get_event" I'm looking for?

From this issue in the Dashing GitHub repository: you can use server-sent events to get data out of a Dashing dashboard.
Dashing provides the data from the same API endpoint the dashboard itself consumes:
http://dashingdemo.herokuapp.com/events
Excerpt from the returned data:
data: {"current":77,"last":82,"id":"valuation","updatedAt":1461840437}
data: {"current":104578,"last":89199,"id":"karma","updatedAt":1461840437}
data: {"value":62,"id":"synergy","updatedAt":1461840437}


AWS SDK in Java - How to get activities from a worker when multiple executions are ongoing for a state machine

AWS Step Functions
My problem is how to sendTaskSuccess or sendTaskFailure to an Activity running under a state machine in AWS.
My actual intent is to notify the specific activities that belong to a particular state machine execution.
I can successfully send a notification to all waiting activities by activity ARN, but what I actually need is to send a notification to the specific activity that belongs to a particular state machine execution.
Example: StateMachine - SM1
There are two executions ongoing for SM1: SM1E1 and SM1E2. In that case I want to sendTaskSuccess to the activity which belongs to SM1E1.
The following code is what I used, but it sends the notification to all activities:
GetActivityTaskResult getActivityTaskResult = client.getActivityTask(
        new GetActivityTaskRequest().withActivityArn("arn detail"));

if (getActivityTaskResult.getTaskToken() != null) {
    try {
        JsonNode json = Jackson.jsonNodeOf(getActivityTaskResult.getInput());
        String outputResult = patientRegistrationActivity.setStatus(json.get("patientId").textValue());
        System.out.println("outputResult " + outputResult);

        SendTaskSuccessRequest sendTaskRequest = new SendTaskSuccessRequest()
                .withOutput(outputResult)
                .withTaskToken(getActivityTaskResult.getTaskToken());
        client.sendTaskSuccess(sendTaskRequest);
    } catch (Exception e) {
        client.sendTaskFailure(
                new SendTaskFailureRequest().withTaskToken(getActivityTaskResult.getTaskToken()));
    }
}
As far as I know you have no control over which task token is returned. You may get one for SM1E1 or SM1E2 and you cannot tell by looking at the task token. GetActivityTask returns "input" so based on that you may be able to tell which execution you are dealing with but if you get a token you are not interested in, I don't think there's a way to put it back so you won't be able to get it again with GetActivityTask later. I guess you could store it in a database somewhere for use later.
One idea you can try is to use the new callback integration pattern. You can specify the Payload parameter in the state definition to include the task token, like this: token.$: "$$.Task.Token". Then use GetExecutionHistory to find the TaskScheduled event of the execution you are interested in, retrieve the parameters.Payload.token value, and use that with sendTaskSuccess.
Here's a snippet of my serverless.yml file that describes the state
WaitForUserInput: # Wait for the user to do something
  Type: Task
  Resource: arn:aws:states:::lambda:invoke.waitForTaskToken
  Parameters:
    FunctionName:
      Fn::GetAtt: [WaitForUserInputLambdaFunction, Arn]
    Payload:
      token.$: "$$.Task.Token"
      executionArn.$: "$$.Execution.Id"
  Next: DoSomethingElse
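A hedged Java sketch of the lookup described above, using the AWS SDK for Java v1 to match the question's code (client, executionArn and outputResult reuse names from the earlier snippets; paging of the history via getNextToken() is omitted):

GetExecutionHistoryResult history = client.getExecutionHistory(
        new GetExecutionHistoryRequest().withExecutionArn(executionArn));

for (HistoryEvent event : history.getEvents()) {
    if ("TaskScheduled".equals(event.getType())) {
        // Parameters holds the JSON the state passed to the task,
        // including the token from the Payload defined above
        JsonNode parameters = Jackson.jsonNodeOf(
                event.getTaskScheduledEventDetails().getParameters());
        String taskToken = parameters.get("Payload").get("token").textValue();

        client.sendTaskSuccess(new SendTaskSuccessRequest()
                .withTaskToken(taskToken)
                .withOutput(outputResult));
        break;
    }
}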
I did a POC to check, and below is the solution.
If the token has been consumed by getActivityTaskResult.getTaskToken() and your conditions are not satisfied by the request input, you can use the line below to avoid consuming the token: awsStepFunctionClient.sendTaskHeartbeat(new SendTaskHeartbeatRequest().withTaskToken(taskToken))
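A sketch of that POC pattern in context (the executionArn field in the task input is an assumption; it would have to be passed to the activity, for example via the Payload shown in the previous answer):

GetActivityTaskResult task = client.getActivityTask(
        new GetActivityTaskRequest().withActivityArn("arn detail"));

if (task.getTaskToken() != null) {
    JsonNode input = Jackson.jsonNodeOf(task.getInput());
    if (expectedExecutionArn.equals(input.get("executionArn").textValue())) {
        client.sendTaskSuccess(new SendTaskSuccessRequest()
                .withTaskToken(task.getTaskToken())
                .withOutput("{}"));
    } else {
        // Not the execution we want: send a heartbeat instead of a
        // success/failure so the token is not consumed
        client.sendTaskHeartbeat(new SendTaskHeartbeatRequest()
                .withTaskToken(task.getTaskToken()));
    }
}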

Duplicates on Apache Beam / Dataflow inputs even when using withIdAttribute

I am trying to ingest data from a 3rd party API into a Dataflow pipeline. Since the 3rd party doesn't make webhooks available, I wrote a custom script that constantly polls their endpoint for more data.
The data is refreshed every 15 minutes, but since I don't want to miss any datapoints and I want to consume them as soon as new data is available, my "crawler" runs every minute. The script then sends the data to a PubSub topic. It is easy to see that PubSub will receive about 15 repeated messages for each datapoint in the source.
My first attempt to identify and discard those repeated messages was to add a custom attribute to each PubSub message (eventid), created from a hash of its [ID + updated_time] at source.
const attributes = {
  eventid: Buffer.from(`${item.lastupdate}|${item.segmentid}`).toString('base64'),
  timestamp: item.timestamp.toString()
};

const dataBuffer = Buffer.from(JSON.stringify(item));
publisher.publish(dataBuffer, attributes);
Then I configured Dataflow with a withIdAttribute() (which is the new idLabel(), based on Record IDs).
PCollection<String> input = p
    .apply("ReadFromPubSub", PubsubIO
        .readStrings()
        .fromTopic(String.format("projects/%s/topics/%s", options.getProject(), options.getIncomingDataTopic()))
        .withTimestampAttribute("timestamp")
        .withIdAttribute("eventid"))
    .apply("OutputToBigQuery", ...)
With that implementation, I was expecting that when the script sends the same datapoint a second time, the repeated eventid would be the same and the message discarded. But for some reason, I still see duplicates on the output dataset.
Some questions:
Is there a clever way to ingest the data into Dataflow from that 3rd-party API if they don't provide webhooks?
Any ideas on why Dataflow is not discarding the messages in this situation?
I know about the 10-minute restriction for deduplication on Dataflow, but I see duplicated data even on the 2nd insertion (2 minutes).
Any help will be greatly appreciated!
I think you are on the right track; instead of the hash, I recommend using timestamps. A better way to do this is by using windows. Review this document, which filters data that is outside of the window.
Regarding the additional duplicate data: if you are using pull subscriptions and the acknowledgement deadline is reached before the data has been processed, the message will be resent, as per the at-least-once delivery semantics. In this case, change the acknowledgement deadline; the default is 10 seconds.
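If you go the windowing route, a minimal sketch using Beam's built-in Distinct transform (org.apache.beam.sdk.transforms.Distinct), assuming input is the PCollection<String> read from Pub/Sub before the BigQuery write and that the repeated polls publish byte-identical JSON:

PCollection<String> deduped = input
    .apply("Window15m", Window.<String>into(
        FixedWindows.of(Duration.standardMinutes(15))))
    // Identical elements within a window collapse to a single element
    .apply("DropDuplicates", Distinct.<String>create());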

How to schedule a Laravel 5 job to get data from an external JSON file and store the value in a database?

I'm currently working on a project in Laravel, and I want to schedule a job that grabs a value (the price of Bitcoin) from an external API (JSON file) and stores this value in my database every few minutes.
So far, I have created a job using the artisan command: artisan make:job UpdateBitcoinMarketPrice. But I've no idea what to include in the public function handle() inside of the Job class that was created.
I have gathered that I can schedule this job from App\Console\Kernel.php with the following function:
protected function schedule(Schedule $schedule)
{
    // $schedule->command('inspire')->hourly();
    $schedule->job(new UpdateBitcoinMarketPrice)->everyFiveMinutes();
}
Should I, for example, create a new Model that stores said value, and then create a new object every time this runs?
Should I then fetch the first row of the table whenever I wish to return the value?
Job classes are very simple, normally containing only a handle() method which is called when the job is processed by the queue. You can use the constructor to inject any parameter or serialize a model so you can use it in your handle method.
So, put simply, you can make the API call in the handle method and store the response in the database, knowing that the API call will be fired as a background job.
Something along the lines of:
public function __construct(User $user)
{
    // In this case Laravel serializes the User model, for example, so you
    // could use it in your background job.
    // This can be anything that you need in order to make the call.
    $this->user = $user;
}

// Injecting ExternalServiceClass and Transformer (to transform the API
// response) as examples.
public function handle(ExternalServiceClass $service, Transformer $transform)
{
    // Make the call to the API, parse the response, and store it in the database.
    $response = $service->postRequest($someUri, $someParams);
    $parsedResponse = $transform->serviceResponse($response);

    DatabaseModel::firstOrCreate($parsedResponse);
}
The handle method is called when the job is processed by the queue. Note that you are able to type-hint dependencies on the handle method of the job, like in the example above. The Laravel service container automatically injects these dependencies.
Now, since you are going to run the job everyFiveMinutes(), you have to be careful: by default, scheduled tasks will run even if the previous instance of the task is still running.
To prevent this, you may use the withoutOverlapping method:
$schedule->job(new UpdateBitcoinMarketPrice)->everyFiveMinutes()->withoutOverlapping();
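As for the model question: storing one row per run and reading the newest row back is a reasonable approach. A hedged sketch of a concrete handle() for this job, in which the endpoint URL and the BitcoinPrice model (with price in its $fillable array) are hypothetical:

public function handle()
{
    // Hypothetical JSON endpoint returning e.g. {"USD": 42000.5}
    $client = new \GuzzleHttp\Client();
    $response = $client->get('https://api.example.com/bitcoin/price.json');
    $data = json_decode($response->getBody(), true);

    // One row per run; read it back later with BitcoinPrice::latest()->first()
    \App\BitcoinPrice::create(['price' => $data['USD']]);
}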

How to retrieve the complete REST API response in VuGen?

I am trying to retrieve the complete JSON response in VuGen. I am new to writing scripts in VuGen. I am using the Web - HTTP/HTML protocol and just wrote a simple script to call a REST service with POST.
Action()
{
    web_rest("POST: http://losthost:8181/DBConnector/restServices/cass...",
        "URL=http://losthost:8181/DBConnector/restServices/oep_catalog_v1",
        "Method=POST",
        "EncType=raw",
        "Snapshot=t868726.inf",
        HEADERS,
        "Name=filter", "Value=upc=123456789", ENDHEADER,
        "Name=env", "Value=qa", ENDHEADER,
        LAST);

    return 0;
}
I don't know what to do next. I searched the internet for a command to pull the response value. I found web_reg_save_param, but it just pulls one value. I need the complete response saved in a file or a string.
Please help.
VuGen provides several APIs to extract response data.
For example, you can do boundary-based correlation with empty left and right boundaries. The sample below saves the web_rest response (the body of donuts.js) in the parameter CorrelationParameter3; note that web_reg_save_param_ex registers the extraction for the next request, so it comes first.
web_reg_save_param_ex(
    "ParamName=CorrelationParameter3",
    "LB=",
    "RB=",
    SEARCH_FILTERS,
    "Scope=Body",
    LAST);

web_rest("GET: donuts.js",
    "URL=http://adobe.github.io/Spry/data/json/donuts.js",
    "Method=GET",
    "Snapshot=t769333.inf",
    LAST);
This process of locating, extracting, and replacing dynamic values is called “correlation”.
You can read more about correlations in the “LoadRunner correlations kept simple” blog post.
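To get the captured response from the parameter into a file, the standard C file functions work inside VuGen; a sketch (the file path is an assumption):

Action()
{
    long file;

    // ... web_reg_save_param_ex and web_rest as above ...

    file = fopen("c:\\temp\\response.json", "w");
    if (file == NULL) {
        lr_error_message("Could not open file");
        return -1;
    }
    fprintf(file, "%s", lr_eval_string("{CorrelationParameter3}"));
    fclose(file);

    return 0;
}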
Your manager owes you training and a mentor for a period of time if you are asked to perform in this capacity.

Evaluate code once the view is rendered in Rails

In Rails I've built a sort of cron job, but once the view is rendered there is some code I would like to run, like a 'signoff' on the task processed.
Where would I put code so that it is run at the absolute end of processing (once the view is rendered)?
How does Rails process HTML? Does it buffer? (Would it flush HTML to the user as it's rendered, or only once it's rendered?)
You can render the view to a string, catch any Timeouts that could happen while rendering, log the results of the action, and return the string:
begin
  @elements = Element.find(:all)
  html = render_to_string

  # Store the result
  Result.create(:element_count => @elements.count)
rescue Timeout::Error
  # Store the result of the call as failed?
  Result.create(:element_count => 0)
end

send_data html, :disposition => 'inline', :type => 'text/html'
Some other things that you can do to achieve your goal could be:
You can use a rake task instead of a controller action, if the code that you need to execute is only triggered by your cron job (see the sketch at the end of this answer).
Instead of directly making a request using wget in your cron job, you can call a script that will make a request to your controller, inspect the output of the request, and then log the result (maybe by calling a new action in the controller).
To pick an element from the rendered HTML, you could read the value using JavaScript/jQuery:
$(document).ready(function() {
var completion = $('#items_processed').val();
});
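For the rake-task alternative mentioned above, a minimal sketch (the task name and file are assumptions; the models mirror the earlier snippet, and cron would run rake reports:process instead of wget):

# lib/tasks/reports.rake
namespace :reports do
  desc 'Process elements and record a signoff once everything has finished'
  task :process => :environment do
    elements = Element.find(:all)
    # ... the work the controller action used to do ...

    # The signoff runs at the absolute end, with no view rendering involved
    Result.create(:element_count => elements.count)
  end
end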