Is a small sub-worker farm feasible?
Let's say we have 100K URLs that we must test to see which are still active and which are dead, and we want to do this as quickly as possible using JavaScript, reporting a running tally of results in the UI as the work progresses:
Total URLs processed: ###### Dead URLs Found: ###### Timeouts: #######
Can we create a master worker, pass it the 100K URLs, and have that worker in turn create an array of 100 minion sub-workers, sending each minion an array of only 1K items? Each minion would do HEAD requests for its list of URLs, reporting the request status (good, 404, etc.) back to the master with each request, and the master in turn would periodically post a message back to the main window, where the UI progress counters would get incremented.
Would the master worker be able to listen to messages from its 100 minions and successfully update its local variables with the progress counts as they are reported (total processed, total dead, total timed out) without things getting clobbered? And then, say with every 100 URLs processed, the master would post these tallies back to the UI?
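To make the shape concrete, here is a minimal sketch of what I have in mind, assuming browser Web Workers with nested-worker support; the file names (master.js, minion.js) and field names are made up, and it is meant to show the message flow, not a tuned implementation:

// main page: spawn the master, hand it the URLs, update UI counters on each progress message
const master = new Worker('master.js');
master.postMessage({ urls: allUrls });          // allUrls: the array of 100K URL strings
master.onmessage = (e) => {
  const { processed, dead, timeouts } = e.data; // periodic tallies from the master
  // ...update the three UI counters here...
};

// master.js: split the list across 100 minions and aggregate their per-URL reports
let processed = 0, dead = 0, timeouts = 0;
onmessage = (e) => {
  const urls = e.data.urls;
  const chunk = Math.ceil(urls.length / 100);   // ~1K URLs per minion
  for (let i = 0; i < urls.length; i += chunk) {
    const minion = new Worker('minion.js');
    minion.postMessage({ urls: urls.slice(i, i + chunk) });
    minion.onmessage = (msg) => {
      // the master is single-threaded: these handlers run one at a time,
      // so the counters cannot be clobbered by "simultaneous" reports
      processed++;
      if (msg.data.status === 'dead') dead++;
      if (msg.data.status === 'timeout') timeouts++;
      if (processed % 100 === 0) postMessage({ processed, dead, timeouts });
    };
  }
};

// minion.js: HEAD-request each URL in its slice and report one status per URL
onmessage = async (e) => {
  for (const url of e.data.urls) {
    try {
      const resp = await fetch(url, { method: 'HEAD' });
      postMessage({ url, status: resp.ok ? 'good' : 'dead' });
    } catch (err) {
      postMessage({ url, status: 'timeout' });  // network failures/aborts lumped in here
    }
  }
};

Whether 100 workers is the right fan-out, and how cross-origin rules affect HEAD requests from a worker, are separate questions; the sketch is only about whether per-message aggregation in the master is safe.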
I need a watcher. For one of my applications we are getting some Not Found errors in our main Mule application. So we implemented a new application: payloads that fail with Not Found errors in the main application are sent to this new application, where they are retried n times; if they still fail after n retries, they go to the error queue. For the errors that end up in the error queue after the retries, I need a watcher notification in Kibana. The watcher should trigger every 24 hours and notify the team by email.
I need a Kibana watcher script for the above scenario.
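A rough sketch of what such a watch could look like, in Elasticsearch Watcher syntax. The watch id, index pattern, the "message": "Not Found" match, and the email address are placeholders I have made up, the query has to match however your error-queue failures are actually indexed, and the exact endpoint and hits.total path differ between Elasticsearch 5.x/6.x and 7.x:

PUT _xpack/watcher/watch/error_queue_daily_watch
{
  "trigger": { "schedule": { "interval": "24h" } },
  "input": {
    "search": {
      "request": {
        "indices": [ "logstash-*" ],
        "body": {
          "query": {
            "bool": {
              "filter": [
                { "match_phrase": { "message": "Not Found" } },
                { "range": { "@timestamp": { "gte": "now-24h" } } }
              ]
            }
          }
        }
      }
    }
  },
  "condition": { "compare": { "ctx.payload.hits.total": { "gt": 0 } } },
  "actions": {
    "notify_team": {
      "email": {
        "to": "team@example.com",
        "subject": "Payloads reached the error queue in the last 24 hours",
        "body": "{{ctx.payload.hits.total}} payloads failed after all retries and landed in the error queue."
      }
    }
  }
}

Note that the email action only works once an email account has been configured in elasticsearch.yml.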
How can I refresh a webpage continuously using LoadRunner?
I am looking for an option with LoadRunner, not Selenium, JS, or HTML.
Define, "refresh continuously."
Why do I ask this? The server does not care what is on the other end of the request. It could be a full client. It could be a test tool like LoadRunner. It could be a program written in Python. It could be a command line tool like cUrl. The server has no concept of "refresh" on the client, only responding to a request.
You could put a single page load in a loop with no thinktime between requests. This would violate a core tenet of client-server computing, that a delay exists between requests from a client during which the items returned are being processed by either the silicon or organic computing units on the client front.
lr_start_transaction("foo");
while (lr_get_transaction_duration("foo")<3600)
{
// my request code here
sleep(rand()%9000 + 1001);
}
lr_stop_transaction("foo");
On reporting systems one often observes polling behavior to see when a report is complete. The delay between these requests is often 5-10 seconds before the report request is refreshed. In LoadRunner code this is typically handled with a looping structure that breaks when the report is actually returned: inside the loop it issues the request, checks the result to see whether exit is warranted, waits the 5-10 seconds, then loops.
You could have a script with one request, loaded in the Controller with either zero or some pacing. Schedule it for an hour of execution.
Many, many ways.
I have an application hosted on some site, say "www.myapp.com", which makes a request to another application (say compute_app) hosted somewhere else by passing a few parameters; compute_app then does some computation and returns the data back to the primary app.
Say I need to implement two operations in compute_app: Addition and Decrement.
Addition: this operation needs, say, 2 numbers as input and the output is 1 number.
Decrement: this operation needs only 1 number as input and the result is that number decremented by some constant value.
Let's assume compute_app is hosted at "www.heroku.com/compute_app".
Now, basically, I need to pass a string ("add"/"dec") along with 2/1 numbers from "www.myapp.com" to "www.heroku.com/compute_app", which does the computation and returns the result back to "www.myapp.com".
How should I go about designing this using GET/POST requests?
The example I took here is figurative.
Basically, I need to make REST API calls from my "myapp" to an externally hosted app (compute_app, which does some data pre-processing), and compute_app then makes the actual REST calls to the servers serving live data.
So the control flow is like:
myapp (raw data) ----> compute_app (pre-processing & call to REST servers).
The REST servers ------> return a JSON response to compute_app (which does pre-processing again) and then ----> return the data to myapp.
myapp = ChatBot
compute_app = 3rd-party app where I can write Java/PHP code for the preprocessing, because the chatbot framework doesn't allow me to write any scripting code.
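To make the intent concrete, this is the kind of interface I imagine, sketched in JavaScript/Node only for illustration (the /compute_app route, the op/a/b parameter names, and the Express framework are arbitrary choices on my part, not something the platforms dictate): compute_app exposes one POST endpoint that takes the operation and the operands in a JSON body and returns a JSON result, and myapp simply POSTs to it.

// compute_app side: one POST endpoint taking the operation and operands as JSON
const express = require('express');
const app = express();
app.use(express.json());

const DECREMENT_CONSTANT = 1;                    // "some constant value"

app.post('/compute_app', (req, res) => {
  const { op, a, b } = req.body;                 // e.g. { "op": "add", "a": 2, "b": 3 }
  if (op === 'add') return res.json({ result: a + b });
  if (op === 'dec') return res.json({ result: a - DECREMENT_CONSTANT });
  res.status(400).json({ error: 'unknown op' });
});

app.listen(process.env.PORT || 3000);

// myapp side: POST the operation and operands, read the JSON result
async function callComputeApp() {
  const resp = await fetch('https://www.heroku.com/compute_app', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ op: 'dec', a: 5 })
  });
  const { result } = await resp.json();          // -> 4 with DECREMENT_CONSTANT = 1
  return result;
}

GET would also work for operations this small (e.g. /compute_app?op=add&a=2&b=3), but POST keeps the parameter handling uniform once the payload grows into real pre-processing data.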
I have created a URL monitor to check the availability of a website. I kept all values as they are in the Monitor tab and added the host ( http://abcd.xyz.org ). I scheduled it for every 10 minutes in the Schedule tab. In the Measure tab I set the Lower Severe value of the HostReachable measure to 0.
I created an incident, added the HostReachable measure, and set its aggregation to last value. The evaluation timeframe is 1 minute. In the Action tab I put my email for notification.
Now I get an email every 10 minutes, even though I browsed the website and it's working fine. I can't understand why incidents occur if the website is loading fine.
Email message:
Violations
HostReachable: STG URL Monitor#abcd.xyz.org: Was 0.00 but should be higher than 0.00.
Latest logs given below:
2016-08-17 15:42:33 INFO [UrlMonitor#STG URL Monitor_0] Previous message was repeated 1 times.
2016-08-17 15:42:33 INFO [UrlMonitor#STG URL Monitor_0] Executing method: GET, URI: http://abcd.xyz.org:80/
Thanks
This could have to do with the evaluation timeframe of your Incident. If your monitor is scheduled to run every 10 minutes but your evaluation timeframe is 1 minute it means that you have 9 evaluations where there is "no data".
We typically recommend aligning the monitor execution interval with the incident's evaluation timeframe. Can you give that a try?
Also - I typically chart these measures to see which values the monitor produces in which time interval. This is an easy visual verification check to make sure that, e.g., the monitor works correctly and delivers data. Can you do that?
I also wanted to say that we have a very good Dynatrace Online Community and discussion forum at http://answers.dynatrace.com. You might want to post future questions there, as there are 100k+ Dynatrace users active on that community.
Andi
I've set up a small ActiveMQ Network of Brokers to increase reliability. It consists of 3 nodes with the following properties (full config template file is available here):
ActiveMQ Version 5.13.3 (latest as of July 16)
Local LevelDB persistence adapter
NetworkConnector uri="static:(tcp://${OTHER_NODE1}:61616,tcp://${OTHER_NODE2}:61616)" with the two variables set, e.g. for node2, to node1 and node3 (uni-directional connections between all nodes).
Clients connect with failover:(tcp://node1:61616,tcp://node2:61616,tcp://node3:61616), send and retrieve messages as needed.
The failover protocol randomizes the target machine, so messages might be sent back and forth inside the cluster.
There are two (failing) scenarios:
As it is described now, some messages are not delivered, because they are not allowed to go "back". This is done to avoid loops and is described in this blog post.
Activating the replayWhenNoConsumers flag, as described in the blog and in "NoB: Stuck Messages", causes those messages to be recognized as duplicates. With enableAudit enabled, I get "cursor got duplicate send ID"; disabling it gives me "<MSG> paged in, is cursor audit disabled? Removing from store and redirecting to dlq".
Maybe this is trivial to fix - does anybody have an idea?
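For completeness, the destination policy I am toggling looks roughly like this in activemq.xml (a sketch based on the blog post; the queue pattern and replayDelay are just examples, and enableAudit is the attribute I switched between the two experiments above):

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- applies to all queues; enableAudit is what I toggled in the two experiments -->
      <policyEntry queue=">" enableAudit="false">
        <networkBridgeFilterFactory>
          <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true" replayDelay="1000"/>
        </networkBridgeFilterFactory>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>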