I have a Zabbix setup that monitors my servers. I also have Cloudflare. I have set up monitoring for all the individual servers, but I would like to set up something to monitor the response time of data served via Cloudflare.
It would be a simple web page load that checks response time, but of course I can't put a Zabbix agent on Cloudflare!
Is there a way to do this?
Zabbix "web scenario" has
average download speed per second for all steps of whole scenario
download speed per second
response time
response code
see https://www.zabbix.com/documentation/current/manual/web_monitoring
There is also another web check type, called "HTTP agent", which instead retrieves
page headers and content;
see https://www.zabbix.com/documentation/current/manual/config/items/itemtypes/http
Both are executed by the Zabbix server / proxy, so no agent is needed.
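Outside of Zabbix, the kind of measurement a web scenario step records (response code, response time, bytes downloaded) can be sketched in a few lines. This is an illustrative standalone check, not Zabbix's implementation; the local test server just stands in for the page served via Cloudflare:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stand-in for the page that would be served via Cloudflare.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html>ok</html>")

    def log_message(self, *args):  # keep the output quiet
        pass

def check(url):
    """Return (response_code, response_time_seconds, bytes_downloaded)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
        code = resp.status
    elapsed = time.perf_counter() - start
    return code, elapsed, len(body)

# Spin up a throwaway local server so the example is self-contained.
server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

code, elapsed, size = check(f"http://127.0.0.1:{server.server_port}/")
print(code, round(elapsed, 3), size)
server.shutdown()
```

In Zabbix itself you would instead just point a web scenario (or an HTTP agent item) at the public URL and let the server/proxy collect these values on a schedule.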
I am using the Live Agent REST API, which allows you to programmatically create Live Agent sessions (https://help.salesforce.com/s/articleView?id=000386079&type=1).
Based on the above documentation, I was able to complete up to step 2. At step 3, where I try to pull messages from the server, I get the response 'ChatRequestFail' with the reason 'NOPOST'.
I know for sure that the agent it's trying to reach is available for chat, but the agent did not receive any notification, even after being available and being added to all the queues they are meant to be in.
I checked all the configuration and proxy settings and they seem fine. I'm not sure where to look to debug this issue.
Let's say I make an Elm app; it requests data over a websocket to check the price of bitcoin from, say, poloniex.com. I compile it to an .html file and deploy it to, say, Heroku or whatever server I like on the backend.
When a user comes to my website and requests that .html file, and is then looking at the bitcoin price from the websocket request, is the user's IP address making that websocket request or is it the backend's (eg Heroku in this case) IP address making the websocket request?
I ask because I was considering two different designs: either have my backend pull the bitcoin price data and then serve that to my users, or have the users request the price directly from the source itself (i.e. Poloniex in this case). The latter would be less of a headache, but it won't be possible if all the requests end up coming from the backend and therefore one IP address (there would be request limits).
Assuming you are using the standard Elm websocket package, elm-lang/websocket, the websocket connects to whatever URL you point it at. If you set it up like this:
import WebSocket exposing (listen)

subscriptions model =
    listen "ws://echo.websocket.org" Echo
Then the client browser will connect directly with echo.websocket.org. The target of that websocket connection will likely see your application's origin in the handshake, but the connection itself will be made from the IP of the user's browser, which is acting as the client.
If you instead want your backend server application to act as a proxy, you would point listen at your own backend's URL:
subscriptions model =
    listen "ws://myapp.com" ...
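If you go the backend-proxy route, the concern from the question (one backend IP hitting upstream rate limits) is usually handled by fetching once and caching. Here is a minimal, language-agnostic sketch in Python, with the upstream fetch stubbed out (Poloniex's real API is not modeled):

```python
import time

class CachedPriceProxy:
    """The backend fetches the upstream price at most once per TTL and
    serves the cached value to every user, so only one client (the
    backend) ever talks to the upstream API."""

    def __init__(self, fetch_upstream, ttl_seconds=5.0, clock=time.monotonic):
        self.fetch_upstream = fetch_upstream  # callable returning the price
        self.ttl = ttl_seconds
        self.clock = clock
        self._price = None
        self._fetched_at = None

    def get_price(self):
        now = self.clock()
        if self._fetched_at is None or now - self._fetched_at >= self.ttl:
            self._price = self.fetch_upstream()
            self._fetched_at = now
        return self._price

# Usage with a stub upstream that counts how often it is actually hit:
calls = {"n": 0}
def fake_upstream():
    calls["n"] += 1
    return 42000.0

proxy = CachedPriceProxy(fake_upstream, ttl_seconds=60.0)
prices = [proxy.get_price() for _ in range(1000)]  # 1000 "users"
print(calls["n"])  # upstream was hit only once
```

The trade-off is freshness: every user sees a price that can be up to one TTL old, in exchange for a constant request rate against the upstream no matter how many visitors you have.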
I have two websites http://www.example.com and https://www.example.com. I am using HTML5 session storage to store user preferences.
A user arrives at http://www.example.com and I load some default settings via ajax.
They browse to a page requiring login and are sent to https://www.example.com/login.html
After they are done logging in they are sent back to http://www.example.com where because they are now logged in I should fetch new settings from the server. The trouble is that http and https are different origins and can't share session storage.
Things I've tried that don't work:
Loading a page, http://www.example.com/clearSession.html, in an iframe that just runs sessionStorage.removeItem('key') to clear my data; but it seems that this has its own browsing context, so it doesn't work.
Things I've tried that work but I'm not wanting to use:
Using a cookie. This works great because http and https can share cookies, but it means all my user settings get sent to the server with every resource request. This is usually about 4 KB but could be up to 1 MB of data. And no, I can't host my resources on a different domain.
Not caching the settings and just making the request every time. I am doing this on older browsers, as they don't support session storage, but it slows down the page load and puts extra load on my database.
I can tell you how we have solved this problem, but it doesn't involve local sessionStorage. We use a server-side session to store the user's login data (username, ID, etc.) after they have been to our authentication server and back. Before they are authenticated you could still collect preference data from them by using AJAX to report these preferences back to a web service on the server that can store it in the server's session scope. This would break the RESTful model, however, because it would assume the use of server side sessions. That would depend on your server language and how you have your web services set up.
I think you will always bump into that origin problem because that is a restriction designed into local storage in general.
Switch everything to https; it's the standard now.
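The reason the question runs into this wall is the same-origin rule: a web origin is the (scheme, host, port) triple, so http:// and https:// pages are different origins even on the same host, and sessionStorage is partitioned by origin. A quick illustrative sketch of that comparison (not a browser's actual implementation):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Return the (scheme, host, port) triple that defines a web origin."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    return origin(a) == origin(b)

# Same scheme/host/port -> same origin, path doesn't matter:
print(same_origin("http://www.example.com/a", "http://www.example.com/b"))  # True
# http vs https -> different scheme (and default port) -> different origin:
print(same_origin("http://www.example.com/", "https://www.example.com/"))   # False
```

Serving everything over https collapses the site into one origin, which is why it makes the whole problem disappear.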
If your API and Website making ajax calls to that API are on the same server (even domain), how would you secure that API?
I only want requests from the same server to be allowed! No remote requests from any other domain. I already have SSL installed; does this mean I am safe?
I think you have some confusion that I want to help you clear up.
By the very fact that you are talking about "making Ajax calls" you are talking about your application making remote requests to your server. Even if your website is served from the same domain you are making a remote request.
I only want requests from the same server to be allowed!
Therein lies the problem. You are not talking about making a request from server-to-server. You are talking about making a request from client-to-server (Ajax), so you cannot use IP restrictions (unless you know the IP address of every client that will access your site).
Restricting Ajax requests does not need to be any different from restricting other requests. How do you keep unauthorized users from accessing "normal" web pages? Typically you would have the user authenticate, create a user session on the server, and pass a session cookie back to the client that is then submitted on every request, right? All of that works for Ajax requests too.
If your API is exposed on the internet there is nothing you can do to stop others from trying to make requests against it (again, unless you know all of the IPs of allowed clients). So you have to have server-side control in place to authorize remote calls from your allowed clients.
Oh, and having TLS in place is a step in the right direction. I am always amazed by the number of developers that think they can do without TLS. But TLS alone is not enough.
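The session-cookie flow described above can be outlined in a few lines. This is an illustrative sketch only; in a real app you would use your framework's session handling rather than rolling your own:

```python
import secrets

# Server-side session store: session_id -> user data.
SESSIONS = {}

def log_in(username):
    """After the user authenticates, create a session and return the
    unguessable ID that goes back to the client as an HttpOnly cookie."""
    session_id = secrets.token_urlsafe(32)
    SESSIONS[session_id] = {"user": username}
    return session_id

def authorize(cookie_value):
    """Every request, Ajax or otherwise, is checked the same way:
    look up the submitted session cookie server-side."""
    session = SESSIONS.get(cookie_value)
    if session is None:
        return None  # -> respond 401 Unauthorized
    return session["user"]

sid = log_in("alice")
print(authorize(sid))       # the logged-in user
print(authorize("forged"))  # None: unknown cookie, request rejected
```

The point is that there is nothing Ajax-specific here: the browser attaches the session cookie to Ajax requests automatically, and the server rejects anything without a valid one.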
Look at the Referer header in your HTTP request headers. That tells you where the request came from (though note that it is supplied by the client and can be spoofed, so don't rely on it as a security control).
It depends what you want to secure it from.
Third parties getting their visitors to request data from your API using the credentials those visitors have on your site
Browsers will protect you automatically unless you take steps to disable that protection.
Third parties getting their visitors to request changes to your site using your API and the visitors' credentials
Nothing Ajax specific about this. Implement the usual defences against CSRF.
Third parties requesting data using their own client
Again, nothing Ajax specific about this. You can't prevent the requests being made. You need authentication/authorisation (e.g. password protection).
I already have SSL installed does this mean I am safe
No. That protects data from being intercepted en route. It doesn't prevent other people from requesting the data, or from accessing it at the endpoints.
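For the second case above, the "usual defences against CSRF" typically mean a per-session anti-CSRF token that the server embeds in the page and compares on every state-changing request. A minimal sketch of the synchronizer-token pattern (illustrative only; frameworks ship this built in):

```python
import hmac
import secrets

def issue_csrf_token():
    """Generated server-side per session and embedded in the page/form;
    a cross-site attacker cannot read it, so they cannot submit it."""
    return secrets.token_urlsafe(32)

def is_valid_request(session_token, submitted_token):
    """Reject state-changing requests whose token doesn't match the one
    stored in the session. compare_digest is a constant-time comparison,
    which avoids leaking the token through timing differences."""
    if submitted_token is None:
        return False
    return hmac.compare_digest(session_token, submitted_token)

token = issue_csrf_token()
print(is_valid_request(token, token))       # True: genuine same-site post
print(is_valid_request(token, "attacker"))  # False: forged cross-site post
print(is_valid_request(token, None))        # False: token missing
```

For Ajax calls the token is usually sent in a custom request header, which cross-origin pages cannot set without passing a CORS preflight.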
You can check the IP address. If you want to accept requests only from the same server, place an .htaccess file in the API directory (or use a virtual host configuration directive) to allow only 127.0.0.1 or localhost. The exact configuration depends on which web server you have.
Our architecture uses a Push Engine to send data to the browser.
Could anybody please tell me what the use of a Push Engine is?
(Why is it required, when the same thing can be achieved using normal AJAX programming?)
Please guide me.
Let's say you're visiting a website that is updated continuously. Your browser needs to keep updating the data you're viewing, meaning that the browser needs to keep communicating with the server to get the updates.
You can use Ajax to make requests every few seconds, fetching more data from the server each time. The problem is that you need to make a lot of Ajax calls, and you open a connection (a socket) for each one, so eventually it is a very slow process. And if the interval between the requests is large, there will be a delay between the updates on the server and the updates in your browser.
To solve that, we can manipulate the HTTP calls: keep the request (the connection) open, and continuously send data. That way, when the server wants to send something to the client (browser), there's an open connection, and it doesn't need to wait for the next Ajax call by the browser.
HTTP servers have a timeout on requests, so just before a request times out, the browser will close it and make a new one.
Another (better) method is using the XMPP protocol, which is used in chats like Facebook's and MSN's.
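The "keep the request open until there is data" technique described above is usually called long polling. A minimal sketch of the server-side idea, using a condition variable in place of a real HTTP stack (illustrative only, not tied to any framework):

```python
import threading

class LongPollChannel:
    """Each client 'request' blocks in wait_for_update() until the
    server pushes data or the request times out; the client then
    immediately reconnects, so a connection is almost always open."""

    def __init__(self):
        self._cond = threading.Condition()
        self._latest = None
        self._version = 0

    def push(self, data):
        """Server side: publish new data, waking all waiting requests."""
        with self._cond:
            self._latest = data
            self._version += 1
            self._cond.notify_all()

    def wait_for_update(self, seen_version, timeout=30.0):
        """Client side: block until data newer than seen_version arrives.
        Returns (version, data), or (seen_version, None) on timeout,
        which is the cue for the client to re-poll."""
        with self._cond:
            self._cond.wait_for(lambda: self._version > seen_version, timeout)
            if self._version > seen_version:
                return self._version, self._latest
            return seen_version, None

channel = LongPollChannel()
# Simulate the server producing an update 100 ms into an open request:
threading.Timer(0.1, channel.push, args=("goal! 1-0",)).start()
version, data = channel.wait_for_update(seen_version=0, timeout=5.0)
print(version, data)  # 1 goal! 1-0
```

Compare this with plain polling: here the update is delivered the moment it exists, instead of on the next fixed-interval Ajax tick.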
AJAX is a pull method - it requires the client to connect to the server. If you have some information that you want to display live - for example a live score in a football game - the AJAX call has to be made at regular intervals - even when there is no data waiting on the server. A Push Engine is the reverse - the client and server maintain a connection and the server pushes data when there is data to be sent.