How To Check If My Website Is Up Or Down?

So, I'm currently using a system where I can manually say whether the website is online or not, but I don't see this as efficient because I won't be there 24/7. So I was wondering if there is a way to check whether the website is online and then create a file on a server as soon as it goes down?

You can use a free service like UptimeRobot. It will send you a notification when the site is down or back up.

I wrote a simple script in Python that checks a website's status.
Here is a link; maybe it will help you.
The script checks the site's HTTP response code. If the status code is OK (200), it does nothing; if the status code is anything other than 200, it sends an email notification to the addresses declared in config.ini.
Finally, in crontab I create a log file with the site status.
1 * * * * /usr/bin/python3 /scripts/WebPageStatusCheck/main.py >> /scripts/WebPageStatusCheck/log/WebPageStatusCheck.log 2>&1
This checks the page status and returns it to main.py:
import urllib.request


class CheckSiteStatus:
    @staticmethod
    def check_site_url(url: str):
        # Prepend the scheme and return the HTTP status code of the response
        url = 'http://' + url
        status = urllib.request.urlopen(url).getcode()
        return status
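For completeness, a rough sketch of what the main.py driven by the crontab entry could look like. The config.ini layout, module name, and SMTP settings below are illustrative assumptions, not the original script:

import configparser
import smtplib
from email.message import EmailMessage

# assumes the class above lives in check_site_status.py
from check_site_status import CheckSiteStatus

config = configparser.ConfigParser()
config.read('config.ini')

url = config['site']['url']                   # e.g. example.com
recipients = config['mail']['to'].split(',')  # comma-separated addresses

try:
    status = CheckSiteStatus.check_site_url(url)
except Exception:
    status = None

if status != 200:
    msg = EmailMessage()
    msg['Subject'] = f'{url} is DOWN (status: {status})'
    msg['From'] = config['mail']['from']
    msg['To'] = ', '.join(recipients)
    msg.set_content(f'HTTP status for {url}: {status}')
    with smtplib.SMTP(config['mail']['smtp_host']) as smtp:
        smtp.send_message(msg)

# This print ends up in the log file that crontab redirects output to
print(f'{url}: {status}')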
With best regards!

You can check it by:
1. Pinging your website
2. Going here and entering your website URL to check availability
Make sure that your site is live on the server. If you want to script the check, see the sketch below.
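A minimal sketch of such a check using Python's requests library; the URL is a placeholder for your own site:

import requests

try:
    response = requests.get('https://example.com', timeout=10)
    print('UP' if response.ok else f'DOWN (status {response.status_code})')
except requests.RequestException as exc:
    print(f'DOWN (no response: {exc})')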

Depending on the technology you are using for your website, you can program events for when the site goes up or down. Look at this for an example of a shutdown event in ASP.NET.


What is the reason why the OneNote APIs won't return all the pages in a notebook?

I am reading around here and I am seeing multiple messages about the /pages endpoint not working as expected.
It seems that the OneNote APIs (MS Graph or Office 365) are not returning all the pages that the user can see. In particular, recent pages are not shown as available.
This message is for those of you who work for Microsoft and keep an eye on this forum. If you have any explanation or workaround for this, we would like to hear about it.
If this is work in progress, we would also like to know when the APIs can be considered stable and reliable enough for production use.
Update:
Permissions or scopes
scopes=[
"Notes.Read",
"Notes.Read.All",
"Notes.ReadWrite",
]
This is for a device authorization flow; the device is acting as a Microsoft Online account. The app is registered in Azure as a personal app, but the enterprise one behaves the same.
The authorization process is described here
What type of app/authentication flow should I select to read my cloud OneNote content using a Python script and a personal Microsoft account?
After that I am using this endpoint to get the notebooks
https://graph.microsoft.com/v1.0/users/user-id/onenote/notebooks
From the returned JSON I pick the notebook I want to read and access the link stored in notebook['sectionsUrl']. This call returns a sections JSON.
From this I pick the section I want and access the link stored in section['pagesUrl'].
Each call returns the expected info except the last one, where I get an arbitrarily low number of pages for the section I want to explore. There is nothing wrong with the format of the info; it is just incomplete or not up to date.
Not sure if this is related, but when I try to access the pages in a section from MS Graph Explorer I see the same behavior (not all the pages are reported). This is a shared notebook and I am using the owner account for all of the above, so it should not be a permission problem.
from msal import PublicClientApplication
import requests

endpoint = "https://graph.microsoft.com/v1.0/me/onenote"
authority = "https://login.microsoftonline.com/consumers"

app = PublicClientApplication(client_id=client_id, authority=authority)
flow = app.initiate_device_flow(scopes=scopes)
# there is an interactive part here that I automated using selenium; you
# are supposed to use a link to enter a code and then authorize the
# device; code not shown
result = app.acquire_token_by_device_flow(flow)
token = result['access_token']
headers = {'Authorization': 'Bearer ' + token}

endpoint = "https://graph.microsoft.com/v1.0/users/c5af8759-4785-4abf-9434-xxxxxxxxx/onenote/notebooks"
notebooks = requests.get(endpoint, headers=headers).json()
for notebook in notebooks['value']:
    print(notebook['displayName'])
    print(notebook['sectionsUrl'])
    print(notebook['sectionGroupsUrl'])

# I pick a certain notebook and fetch its sections from notebook['sectionsUrl']
sections = requests.get(notebook['sectionsUrl'], headers=headers).json()['value']
section = [section for section in sections if section['displayName'] == "Test"][0]

# then fetch the pages of that section from section['pagesUrl']
pages = requests.get(section['pagesUrl'], headers=headers).json()
for page in pages['value']:
    print(page['title'])
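One thing worth checking (a sketch, not a confirmed cause): the pages endpoints are paginated, so a single call may return only the first batch and put the URL of the next batch in '@odata.nextLink'. Continuing with the headers and section from the snippet above, collecting every page would look roughly like this:

def get_all_pages(first_url, headers):
    # Follow the '@odata.nextLink' chain until Graph stops returning one
    pages = []
    url = first_url
    while url:
        data = requests.get(url, headers=headers).json()
        pages.extend(data.get('value', []))
        url = data.get('@odata.nextLink')  # absent on the last batch
    return pages

all_pages = get_all_pages(section['pagesUrl'], headers)
for page in all_pages:
    print(page['title'])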
Update2
If I use this endpoint
https://graph.microsoft.com/v1.0/users/user-id/onenote/sections/section-id/pages
I would expect to get the complete list of pages for that section, but that is not working.
After reading the docs again and again, my understanding is that the approach is to
call https://graph.microsoft.com/v1.0/users/user-id/onenote/pages with a $filter or search query, etc.
Is this correct?
Also, I vaguely remember there is a way to search for a section and have it expanded so that the search returns the children too.
Am I close to understanding this?
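A sketch of that approach, reusing the token and headers from above and assuming the $filter syntax from the OneNote API docs (filtering the pages endpoint by parentSection/id) applies here; treat the exact query as an assumption rather than a confirmed fix:

# Sketch only: the $filter/$top syntax is taken from the OneNote API docs
# and has not been verified against this notebook.
section_id = section['id']
endpoint = (
    "https://graph.microsoft.com/v1.0/me/onenote/pages"
    f"?$filter=parentSection/id eq '{section_id}'"
    "&$top=100"
)
pages = requests.get(endpoint, headers=headers).json()
for page in pages['value']:
    print(page['title'])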
Thank you
MM

Error: The requested URL “[no URL]”, is invalid

Originally posted as a reply to: Error: The requested URL "[no URL]", is invalid
I get this error, but only with one specific website (which is my own). It must be linked to the website, as it happens on 3 different machines on 3 different networks (personal computer on personal Wi-Fi, phone on 4G/3G, and work PC on the work network) and on no other sites. It also happens no matter what you put after the domain name, whether it's a real page or just '/sdjhlgajhsdfg'.
A reply to the other post said that it looks like something to do with Akamai. As this is my site, I went to cPanel and disabled the Akamai options (over 24 hours ago). I do not need any kind of caching like this, as it is a simple HTML/CSS site with only a handful of mostly text pages. The most complicated thing on the site is a downloadable PDF, which I have actually just taken down.
The error ref number changes every time you refresh the page.
Reference #9.d7c33b8.1478565760.55ccef1
Reference #9.d7c33b8.1478566986.560a7c3
Reference #9.d7c33b8.1478567000.560b460
Any advice would be very much appreciated.
I finally found some time to contact my webserver provider.
I can see that the domain has been removed from the Akamai server.
However, the CNAME which was pointing to Akamai server was causing the
issue. I have removed the CNAME record.
After about half an hour it's back up. There are some display issues with the layout, but at least it's displaying the relevant content and not the error.
When you see the Invalid URL error, it indicates that the hostname (domain) is not recognized by Akamai's network (production or staging).
More info at: https://control.akamai.com/search/kb/11327
Hope this helps.
If there is a reverse proxy in front of Akamai, you may get this error.
Client > Reverse Proxy > Akamai > Your API will give this error.
Have your reverse proxy strip the "Host" header sent by the client and try again.
That worked for me in a setup like this:
Browser > Caddy Server > Akamai > My API
In Akamai I had to add a new Property Manager entry for the new URL/cert, then activate it in production.

Setting up a Telegram bot without a server

I'm not well versed in web techniques and would like to know if there's a way - an idea would be to use setWebhook - to make a Telegram bot do simple stuff (like simply repeating the same message whenever someone sends it a message) without setting up a server.
I think there might be no way around it because I need to parse the JSON object to get the chat_id to be able to send messages... but I'm hoping someone here might know a way.
e.g.
https://api.telegram.org/bot<token>/setWebHook?url=https://api.telegram.org/bot<token>/sendMessage?text=Hello%26chat_id=<somehow get the chat_id>
I've tested it with a hard-coded chat id and it works... but of course it'll always only send messages to that same chat, regardless of where it received the message.
Here is a very simple Python bot example; you can run this on your PC, no need for a server.
import requests
import json
from time import sleep
# This will mark the last update we've checked
last_update = 0
# Here, insert the token BotFather gave you for your bot.
token = 'YOUR_TOKEN_HERE'
# This is the url for communicating with your bot
url = 'https://api.telegram.org/bot%s/' % token
# We want to keep checking for updates. So this must be a never ending loop
while True:
    # My chat is up and running, I need to maintain it! Get me all chat updates
    get_updates = json.loads(requests.get(url + 'getUpdates').content)
    # Ok, I've got 'em. Let's iterate through each one
    for update in get_updates['result']:
        # First make sure I haven't read this update yet
        if last_update < update['update_id']:
            last_update = update['update_id']
            # I've got a new update. Let's see what it is.
            if 'message' in update:
                # It's a message! Let's send it back :D
                requests.get(url + 'sendMessage',
                             params=dict(chat_id=update['message']['chat']['id'],
                                         text=update['message']['text']))
    # Let's wait a few seconds for new updates
    sleep(3)
Source
Bot I'm working on
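A possible refinement to the loop above (not part of the original answer): getUpdates accepts an offset parameter, so Telegram can drop updates you have already processed instead of resending the whole backlog on every poll. Reusing the url defined above, roughly:

# Same loop, but confirming processed updates via the 'offset' parameter
# and using long polling ('timeout') instead of sleeping.
offset = 0
while True:
    updates = requests.get(url + 'getUpdates',
                           params={'offset': offset, 'timeout': 30}).json()
    for update in updates.get('result', []):
        offset = update['update_id'] + 1  # everything up to here is confirmed
        message = update.get('message')
        if message and 'text' in message:
            requests.get(url + 'sendMessage',
                         params={'chat_id': message['chat']['id'],
                                 'text': message['text']})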
That's really interesting, but you'll definitely need a server to parse the JSON value and get the chat_id out of it.

Can't retrieve file content via download URL

For about an hour now, I haven't been able to retrieve file content via the downloadUrl attribute.
Each time I try, the API answers with a 401 (Unauthorized) error.
Here's the code used: https://gist.github.com/arnaudbreton/5409015
Credentials are stored in the GAE datastore and are successfully retrieved/refreshed.
The first call to the file endpoint works, but the second call to download the content does not.
It was working this morning.
I tried different things so far:
- Revoking the client secret (suggested as a solution in another thread)
- Creating a new client to test
- Disconnecting my app from Drive and accepting it again
Nothing seems to solve my issue.
Thanks for your help.
A fix/rollback is in progress, should be back to normal soon.
You can use
resp.alternateLink;
resp.webContentLink;
I got stuck on the same issue a day back, using downloadUrl to get the content, but got it working with webContentLink.
var request = gapi.client.drive.files.list();
request.execute(function (resp) {
    resp.alternateLink;
    resp.webContentLink;
});
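If you are calling the REST API directly instead of the JavaScript client, the same fields are exposed by the Drive v2 files endpoint. A rough sketch; access_token is assumed to be a valid OAuth2 token with a Drive scope:

# Rough sketch against the Drive v2 REST endpoint; 'access_token' is an
# assumed placeholder for a valid OAuth2 token with Drive scope.
import requests

headers = {'Authorization': 'Bearer ' + access_token}
files = requests.get('https://www.googleapis.com/drive/v2/files',
                     headers=headers).json()

for item in files.get('items', []):
    # webContentLink / alternateLink are the same fields used above
    print(item['title'], item.get('webContentLink'), item.get('alternateLink'))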

Retrieving information from a web page

My application is meant to speed up the retrieval of phone call information from our telephone system.
The best way to get this information is to create a new search on the telephone system's web interface and export the results to an Excel spreadsheet which my application then imports into a DataSet.
To get the export, from the login screen, the process goes as follows:
Log in
Navigate to Reports Page
Click "Extension Detail" link
Select "Extensions" CheckBox
Select the extensions (typically all the ones currently being used) from the listbox
Specify date range
Click on Export button
It's not a big job to do it manually every day, but, for reliability, it would be great if I can make my application do this automatically the first time it starts every day.
Since more than 1 person in the company is going to use this application, having a Windows Service do it would be even better.
I don't know if it'll help, but the system is Datatex Topaz Next Generation telephone management system: http://www.datatex.co.za/downloads/index.html#TNG
Can anyone give me a basic idea how to do this?
Also, can anyone post links (in comments if need be) to pages where I can learn more about how to do this?
I have done something similar to fetch info from a website. I cannot give you an exact answer, but the idea is to send the login info to the page as form values. If the site relies on cookies, you can use this cookie-aware WebClient:
public class CookieAwareWebClient : WebClient
{
    private CookieContainer cookieContainer = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        if (request is HttpWebRequest)
        {
            (request as HttpWebRequest).CookieContainer = cookieContainer;
        }
        return request;
    }
}
You should be aware that some sites rely on a session id being passed, so the first thing I did was fetch the session id from the page:
var client = new CookieAwareWebClient();
client.Encoding = Encoding.UTF8;
var indexHtml = client.DownloadString(*index page url*);
string sessionID = fetchSessionID(indexHtml);
Then I had to log in to the page, which you can do by uploading values to the page. You can see the specific form elements with "view source", but you have to know a little HTML to do so.
var values = new NameValueCollection();
values.Add("sessionid", sessionID); //Fetched session id
values.Add("brugerid", args[0]); //Username in my case
values.Add("adgangskode", args[1]); //Password in my case
values.Add("login", "Login"); //The login button
//Logging in
client.UploadValues(*url to login*, values); //If all goes perfect, I'm logged in now
And then I could download the page I needed. In your case you may use DownloadFile(...) if the file always has the same URL (something like Export.aspx?From=2010-10-10&To=2010-11-11), or UploadValues(...) where you specify the values as before but save the result.
string html = client.DownloadString(*url*);
It seems you have a lot more steps than I did, but the principle is the same. To see what values you send to the site to log in etc., you can use a program such as Fiddler (Windows), which captures the activity going on. Essentially you do exactly the same thing, but watch out for things like the session id, which is temporary.
The best idea is really to use some native way to fetch the data, but if you don't have access to the code, database, etc., you have to do it the ugly way. You may also need an HTML parser to fetch the data (oops, you don't, because you export to a file). And last but not least, keep in mind that pages can change and there is great potential for the login, parsing, etc. to fail.
Please ask if you are uncertain about what is going on.
ADDITION
The CookieAwareWebClient is not my code:
http://code.google.com/p/gardens/source/browse/Montrics/Physical.MyPyramid/CookieAwareWebClient.cs?r=26
Using CookieContainer with WebClient class
I also found some relevant threads:
What's a good tool to screen-scrape with Javascript support?
http://forums.asp.net/t/1475637.aspx
With an HTTP client, you need to do the following:
Log in, using cookies or HTTP authentication
Request a page
Submit form data
This means that you need some class or component in your program that can do HTTP, cookies, authentication, and forms. With this, you make the same requests a user would.
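The answers above use .NET's WebClient, but the same flow works in any HTTP library. As a rough illustration (the URLs and form field names are placeholders, not the real Topaz/TNG form), here is the equivalent in Python with requests.Session, which keeps cookies between requests:

# Illustration only: URLs and form field names are placeholders.
# requests.Session keeps cookies across requests, playing the same role
# as the CookieAwareWebClient shown earlier.
import requests

with requests.Session() as session:
    # 1. Log in by posting the same form fields the login page submits
    session.post('https://phone-system.example.com/login',
                 data={'username': 'user', 'password': 'secret', 'login': 'Login'})

    # 2. Request the export, passing the report parameters as query values
    response = session.get('https://phone-system.example.com/Export.aspx',
                           params={'From': '2010-10-10', 'To': '2010-11-11'})

    # 3. Save the exported spreadsheet
    with open('extension_detail.xls', 'wb') as handle:
        handle.write(response.content)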