Scenario:
A Heroku server calls Parse.com web services using curl commands. The web services are REST-based and exchange JSON.
I need to load test the Parse.com backend for my website with 40 users hitting it at one time.
Since the communication between the Heroku server and Parse.com goes through REST/JSON web services, I assume I need to generate 40 concurrent calls of each web service against Parse.com.
Each curl command carries one user session token and some parameters in its headers, which I configure in the JMeter HTTP Request sampler when generating the load.
I need to test the scenario in which 40 concurrent users simultaneously create a project on Parse.com ("create project" is also a web service). There is no web service for creating users, but each curl command carries the session token of a user already signed up on the website.
Problem:
The curl command for creating a project on Parse.com carries a single user session token. So even if I set the thread count to 40, it will create 40 projects against one user session, whereas I want 40 different users creating 40 projects simultaneously.
Here is the curl command with one user session token:
curl -X POST \
  -H "X-Parse-Application-Id: " \
  -H "X-Parse-REST-API-Key:" \
  -H "Content-Type: application/json" \
  -H "X-Parse-Session-Token: l8beiq2zv6kf420nbno8k7or1" \
  -d '{"projectType":"feedback","users":null,"ownerOnlyInvite":false,"topicName":"SERVICE UPDATE TOPIC","name":"SERVICE UPDATE","deadline":"2014/03/08","s3ProjectImageKey":"065D417C-EEAA-4E74-BB43-5BDCED126A58"}'
Question:
Should I use the curl command in JMeter for load testing, or is there another alternative for testing REST/JSON web services? If I enter 40 user session tokens in the HTTP header while configuring the HTTP request in JMeter, will that hit Parse.com as 40 concurrent users creating 40 projects?
This can be achieved by following the steps below:
Put all the session tokens you want to use in a CSV file and read it with a CSV Data Set Config; JMeter will use one token for each virtual user.
Refer: http://ivetetecedor.com/how-to-use-a-csv-file-with-jmeter/
Hope this helps.
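As a sketch, the token file could be built like this (the token values and the file name are placeholders; in practice you would paste in the real session tokens of your 40 signed-up users):

```shell
# Build tokens.csv with one Parse session token per line.
# Placeholder values -- replace with real X-Parse-Session-Token values.
for i in $(seq 1 40); do
  echo "session-token-$i"
done > tokens.csv

# One line per JMeter thread: the CSV Data Set Config hands each thread
# the next line, so 40 threads get 40 distinct tokens.
wc -l < tokens.csv
```

In JMeter, add a CSV Data Set Config pointing at tokens.csv with Variable Names set to (for example) sessionToken, then in the HTTP Header Manager set the header X-Parse-Session-Token to ${sessionToken}. Each of the 40 threads then posts with its own session.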
Related
I'm looking for the CLI call to create a Memorystore cluster that uses a service-account keys JSON file.
It seems like this link/command authenticates the gcloud CLI with a service-account credential. If the service account has an IAM policy granting it access to Memorystore, and the main issue is just authentication when running the create command, this might work, but I'd like to confirm.
I've reviewed the docs and found this:
https://cloud.google.com/memorystore/docs/memcached/creating-managing-instances
and this
https://cloud.google.com/appengine/docs/standard/python/memcache/using
but I am struggling to put it all together.
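Putting the two together, a sketch of what that might look like (the instance name, region, sizing flags, and key-file path are placeholders; the create flags follow the Memorystore for Memcached docs linked above):

```shell
# Authenticate gcloud as the service account from the keys JSON file.
gcloud auth activate-service-account --key-file=keys.json

# Create a Memorystore for Memcached instance under that identity.
gcloud memcache instances create my-memcached-instance \
  --region=us-central1 \
  --node-count=1 \
  --node-cpu=1 \
  --node-memory=1GB
```

For the create call to succeed, the service account needs a role with Memorystore create permissions on the project, such as roles/memcache.admin.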
Is there an API in PingFederate / PingOne to validate user credentials (username and password)?
Here is a scenario in which I would like to use it:
user logs in via SAML SSO to my web application
certain application features require that the user's credentials are validated again (to sign off on some operation)
SAML SSO does not make it easy to re-validate user credentials without logging out of the application. Users' passwords are obviously not stored in the application, so the only way to validate credentials is to send them via some API to Ping - however, I was unable to find such an API in Ping.
For example, OKTA (which offers similar services as Ping) does provide such API:
curl -v -X POST \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d '{
"username": "dade.murphy#example.com",
"password": "correcthorsebatterystaple"
}' "https://${yourOktaDomain}/api/v1/authn"
I am looking for something similar in Ping.
Yes - there are two options in PingFederate for this:
Authentication API - This enables clients to authenticate users via a REST API instead of having the adapters present login templates directly. More details here: https://docs.pingidentity.com/bundle/pingfederate-102/page/elz1592262150859.html
OAuth Resource owner password credentials grant type - If you're just looking to validate a username + password combination you could leverage PingFederate's support of OAuth ROPC grant type. It allows you to POST the credentials and get back an Access Token if it was successful. More details here: https://docs.pingidentity.com/bundle/pingfederate-102/page/lzn1564003025072.html
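As a sketch of the second option (the hostname, client ID, and credentials are placeholders; /as/token.oauth2 is PingFederate's default token endpoint path, and an OAuth client configured for the password grant must already exist):

```shell
# ROPC grant: POST the user's credentials to the PingFederate token endpoint.
curl -s -X POST "https://pingfed.example.com/as/token.oauth2" \
  -d "grant_type=password" \
  -d "client_id=my-ropc-client" \
  -d "username=dade.murphy@example.com" \
  -d "password=correcthorsebatterystaple"
```

A 200 response containing an access_token means the credentials were valid; an invalid_grant error means they were not.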
Karolbe, you may also want to take a look at the Adaptive Authentication feature provided by PingFederate, which directly addresses your second requirement as stated above, i.e. "certain application features require that the user credentials are validated again (to sign off some operation)". Here is the reference from the Ping Identity website: adaptive authentication and authorization allow you to evaluate contextual, behavioral, and correlated data to make a more informed decision and gain a higher level of assurance about a user's identity, which is what your requirement 2) is asking for. A typical use case: when a user tries to access a high-value application, or tries to log in after a configured idle time, adaptive authentication forces the user to present authentication credentials again.
I have created a Google cloud function, and in the permissions I've added the 'Cloud Functions Invoker' role to the 3 individual users I want to be able to trigger the function.
The function is accessible at the trigger endpoint provided, similar to this:
https://us-central1-name-of-my-app.cloudfunctions.net/function-name
I have assigned myself the invoker role on the function. When I enter the URL I get a 403
Your client does not have permission to get URL /function-name from
this server.
Since I am signed into my Google account already, I had assumed I would have permission to access this function.
If not, how can I show the authentication prompt as part of the function without exposing the entire function via allUsers?
You can't call the function directly even if you are authenticated in your browser (this feature will come later, once you can put the function behind a Global Load Balancer with IAP activated).
So, to call your function you have to present an identity token (not an access token). For this, you can use the gcloud SDK with a command like this (on Linux, after having initialized it with your user credentials via gcloud init):
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://....
You can also create an API Gateway in front of it (I wrote an article on this) and use an API key, for example.
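To make the difference visible, a quick check (the URL is taken from the question; this assumes gcloud is initialized with an account that has been granted roles/cloudfunctions.invoker on the function):

```shell
URL="https://us-central1-name-of-my-app.cloudfunctions.net/function-name"

# Without a token, the Cloud Functions frontend rejects the call with 403.
curl -s -o /dev/null -w "%{http_code}\n" "$URL"

# With an identity token for the active gcloud account, the call succeeds
# (200) if that account has the Cloud Functions Invoker role.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $(gcloud auth print-identity-token)" "$URL"
```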
I am trying to use the IBM API Connect Test and Monitor tool. When I issue a GET request to my URL, with or without an authorization token, I get this error:
Error. Invalid Request
When I do the same in Postman, I get a proper 200 OK response (with or without the authorization token).
I have tried a POST request as well; it works in Postman but not in IBM API Test and Monitor.
The IBM API Test and Monitor tool is a cloud-based service. Hence, it can only be used to query endpoints that are publicly available on the internet.
localhost refers to the user's computer which does not normally expose any TCP ports to the wider internet.
You can, however, use the IBM API Test and Monitor desktop app to query localhost.
I'm running a process on one system that hosts a JSON API at 127.0.0.1:42000. I would like to connect to this API from a remote system; in particular, I would like to route the data to a web browser.
I've tried using my browser to connect to the local IP address of the machine on that port, but the browser is reporting that there is no response. I don't know much TCP, HTTP, and the like so unfortunately I can't really think of what to try next or even what to search for. Any help would be appreciated.
EDIT
I found a workaround that does what I need. I set up an HTTP server for a directory ~/my-http-server on port 54321 using python -m SimpleHTTPServer 54321. I also set up a repeating job that dumps the API response into a file named api.html in that directory: watch -n1 'wget -q -O - 127.0.0.1:42000 > ~/my-http-server/api.html'. It is far from a perfect solution, but I am at least able to access a cached copy of the API response.
Are the remote system and the host on the same network? If yes, then get the IP address of the host system and use that IP address instead of 127.0.0.1.
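Note that because the API is bound to 127.0.0.1, it only accepts connections from the host itself, so the host's LAN IP will not reach it unless the process can be rebound to 0.0.0.0. A common alternative, assuming you have SSH access to the host (the user name, host IP, and local port 8080 here are placeholders), is to tunnel the loopback port:

```shell
# Forward local port 8080 on this machine to 127.0.0.1:42000 on the host.
# While this runs, http://localhost:8080/ in this machine's browser
# reaches the host's loopback-only API.
ssh -N -L 8080:127.0.0.1:42000 user@192.168.1.10
```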