I am using JMeter to create 100 simultaneous threads in an infinite loop. I use the DNS name of the Elastic Load Balancer that Amazon gave me. I have Min Instances set to 5 and Max set to 5. Will this just magically work without me having to worry about internal IP addresses, with the load distributed evenly between all instances?
What has me thinking is that I stumbled on this link in my research:
http://jmeter.512774.n5.nabble.com/jmeter-amazon-ec2-load-balancing-elb-td529294.html
Yes, it should. If you are testing, you should be able to reproduce the experience of the person in that post and find out.
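One caveat worth checking before you trust the distribution numbers (this is my own assumption, not something from the linked post): the JVM caches DNS lookups, so JMeter can end up pinned to a single one of the ELB's IP addresses. A sketch of how you might force re-resolution (the test plan filename is a placeholder):

```shell
# Hypothetical invocation: lower the JVM's DNS cache TTL so JMeter
# re-resolves the ELB's DNS name frequently and spreads requests
# across all of the load balancer's IP addresses.
JVM_ARGS="-Dsun.net.inetaddr.ttl=0" ./jmeter -n -t loadtest.jmx
```

The JMeter startup script passes `JVM_ARGS` through to the JVM, so no config file change is needed for a one-off run.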
Problem Set
Assuming an application with two networking classes (these are the backend for every outgoing and incoming network connection of the application), I would like to monitor the network load created by the application itself in Python 2.7.
I have already done multiple searches on SO and the net but did not get the results or ideas I was looking for.
I would like to achieve the solution without third-party application dependencies (such as Wireshark or similar products), as I am not in control of the end-user OS and the application must be 100% cross-platform.
networking classes:
a MySQL driver based on mysql-connector-python
a "general" networking class that checks the availability of hosts using the socket library shipped with Python
Question(s)
Are there Python libraries that can achieve this without any third-party product being installed?
Or is the approach so far off that another approach would be much easier / more likely to work?
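For what it's worth, one stdlib-only direction (a sketch under the assumption that both networking classes can be routed through a wrapper object; the class and attribute names here are illustrative, not an existing API) is to count bytes at the socket layer yourself:

```python
import socket
import threading

class CountingSocket(object):
    """Thin wrapper around a socket that tallies bytes sent/received.
    Hypothetical sketch -- names are illustrative, not a real library."""

    def __init__(self, sock):
        self._sock = sock
        self.bytes_sent = 0
        self.bytes_received = 0
        self._lock = threading.Lock()

    def send(self, data):
        n = self._sock.send(data)
        with self._lock:
            self.bytes_sent += n
        return n

    def recv(self, bufsize):
        data = self._sock.recv(bufsize)
        with self._lock:
            self.bytes_received += len(data)
        return data

    def __getattr__(self, name):
        # Delegate everything else (close, settimeout, ...) to the real socket.
        return getattr(self._sock, name)
```

For the MySQL side this only works if the driver lets you inject or monkey-patch its socket; if it does not, per-application counters without OS-level tooling get much harder.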
I'm using a small web app hosted on OpenShift, and that app is actually a web service. Since my app is scalable, it is maintained by an HAProxy load balancer. But I noticed that my app was hibernated after some period of time.
Why does it happen?
Is HAProxy able to maintain a web-service application?
As it turned out, this was just a terminology problem: scalable does not mean the app will never idle. With that cleared up, the issue is resolved.
As said here, "OpenShift suspends and serializes apps without much activity after a given period, and the first time they 'wake' they deserialize and this takes time."
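A common workaround (my own suggestion, not from the quoted source; the URL and interval are placeholders) is to generate periodic activity from any always-on machine so the platform never sees the app as idle:

```shell
# Hypothetical crontab entry: hit the app every 30 minutes so
# OpenShift sees traffic and does not idle/serialize the gear.
*/30 * * * * curl -s -o /dev/null http://myapp-mydomain.rhcloud.com/
```

Note this trades the platform's resource-saving behavior for faster first responses, so it may conflict with the intent of free-tier idling policies.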
I've tried a few different setups of HTTP load balancing in Google Compute Engine.
I used this as a reference :
https://cloud.google.com/compute/docs/load-balancing/http/cross-region-example
I'm at the scenario with 3 instances where I simulate an outage on one of them.
I can see that one instance is not healthy, which is great. My question is: how can I see which one of them is down? In a real scenario, I would want to know immediately which instance it is.
Any suggestions?
You can use the gcloud tool to get detailed health information. Based on that tutorial, I would run:
gcloud compute backend-services get-health NAME
I am not sure how to view this information in the developer console.
See more:
https://cloud.google.com/compute/docs/load-balancing/http/backend-service#health_checking
I've been struggling with a GCE issue for a while and I would like to ask for some help. On the developer console I see a large number of API requests whose origin I can't identify. I'm pretty sure that I'm not running any services / jobs that could burn through the API quota. I see many errors as well. All my VM instances and other resources are working fine, but the issue concerns me. I linked a few screenshots from the dev console showing what's happening. I would really appreciate any help!
Thanks!
https://lh4.googleusercontent.com/-7_HaZLZxvF0/VC14TMVCKoI/AAAAAAAAE6Q/0b8NvjxttMQ/s1600/01.png
https://lh5.googleusercontent.com/-TdXJu2VQ7qA/VC14mcy2AOI/AAAAAAAAE6g/O8VPcoRJpfc/s1600/03.png
It seems like you're using the Google Compute Engine API. When using gcloud compute commands or the Google Compute Engine console tool, you're making requests to the API. Also check whether you have an app that uses a service account to make requests to GCE. You can visit this link for more information.
I'm looking for a scaling mechanism on an OpenStack cloud, and in my search I found OpenShift. My scenario is something like this: we have a distributed system with many agents running on many nodes. One node contains a Message Broker that directs the traffic. We want to monitor the Message Broker node: if a queue is full, we scale out the agent nodes handling that queue. In brief, we monitor one node to scale other nodes.
We use an OpenStack cloud now. In OpenStack, I found Heat and Ceilometer, which are able to create alarms and scale out nodes. However, the alarms are based only on general metrics like CPU, RAM, and network usage (not inside-VM info).
Then I searched one layer up: PaaS. I found that OpenShift can handle scaling apps. But as far as I know, OpenShift's scaling mechanism is to duplicate the app based on network traffic and put an HAProxy in front.
Am I right that OpenShift can't monitor software-specific data? Is there any other tool that suits our scenario?
You can try using this script (https://github.com/openshift/origin-server/blob/master/cartridges/openshift-origin-cartridge-haproxy/usr/bin/haproxy_ctld.rb) to control how your gears are scaled, but I believe that it is still experimental. Make sure that you read through all of the comments and understand what you are doing before making any changes. You might also consider spinning up a second scaled application to test this on before messing with your production application.
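Whatever tool ends up driving the scaling, the "monitor one node to scale others" idea from the question reduces to a small threshold watcher. A minimal sketch (all names are hypothetical; wire `get_depth` to your broker's stats API and `scale_out` to your cloud's scale-out call, e.g. a Heat API request or the haproxy_ctld hook above):

```python
def watch_queue(get_depth, scale_out, high_water=100):
    """Poll the queue depth once; trigger scale_out when it exceeds high_water.

    get_depth  -- callable returning the current queue depth (int)
    scale_out  -- callable that performs one scale-out action
    Returns True if a scale-out was triggered, else False.
    """
    depth = get_depth()
    if depth > high_water:
        scale_out()
        return True
    return False
```

In practice you would run this in a loop with a polling interval and a cooldown so a single burst does not trigger repeated scale-outs.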