We have a development box in our environment that will soon be moving to production. It is written in classic ASP. One of the mandates in the disaster recovery plan is to notify the site admins if there is a problem loading a page. I know how to send the notification, since we have a mail processor that is used to send out communications, but I am looking for help with something like an IF statement: if the page hasn't finished loading in x seconds, email the site admins and keep loading the page until it either completes or times out. We are not having any issues right now, but we need to have something in place should there be future problems. Again, this is classic ASP.
So an example (30 being the timeout in seconds):
If ServerScriptTimeout > 30 Then
    Response.Write "Page is taking longer to load; admins, please investigate"
End If
Would something like this work or even be possible?
We use IIS 7 on our server.
Thank you in advance for your help.
UPDATE - just to add: we already have splash pages and notifications ready should there be a connection issue with one of our database connections. The page validates the connection and then runs a test query; if either of those fails, it notifies the site admins. I would like a similar option for page load time. I already have a script timeout in place, but I wasn't sure whether the script timeout can work with an If statement to do something after x seconds pass, before the timeout itself is triggered.
The best way will be to turn on IIS logging and make sure you enable the time-taken field, which is recorded in milliseconds.
You can then either write a program or log parser to monitor the IIS logs, or use IIS log monitoring software and set up alerts to page out if time-taken is over 30000 milliseconds (30 seconds).
There are numerous tools that do this, just do a search for one that will fit your needs.
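If you do roll your own, the key detail is that IIS W3C logs name their columns in a `#Fields:` header line. A minimal parser sketch in Python (the threshold and field names match the defaults described above; adjust them to your log configuration):

```python
THRESHOLD_MS = 30000  # 30 seconds; time-taken is logged in milliseconds

def slow_requests(log_lines, threshold_ms=THRESHOLD_MS):
    """Yield (uri, time_taken_ms) for W3C-format log entries over the threshold."""
    fields = []
    for line in log_lines:
        line = line.strip()
        if line.startswith("#Fields:"):
            # Column names, e.g.: date time cs-uri-stem sc-status time-taken
            fields = line.split()[1:]
        elif line and not line.startswith("#") and fields:
            row = dict(zip(fields, line.split()))
            taken = int(row.get("time-taken", 0))
            if taken > threshold_ms:
                yield row.get("cs-uri-stem", "?"), taken
```

You would feed it `open(logfile)` on a schedule and fire your existing mail processor for each result.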
If you do it in the code itself and the page times out, you will never get the alert, so I don't think that's the option you're looking for.
One option some sites use is for another application to regularly do a GET on the landing page and send an alert if it fails to load or takes too long.
There are plenty of third party services that offer this, search for "page availability test", or if you want it internal it would not be that hard to write. But the key point is that it has to be external to the IIS server.
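For illustration, such a watchdog can be a few lines of Python running on a separate machine; the URL and threshold are placeholders, and the alerting hook is left to your existing mail processor:

```python
import time
import urllib.request

PAGE_URL = "http://www.example.com/default.asp"  # placeholder: the page to watch
THRESHOLD = 30.0                                 # seconds before we call it "slow"

def check_page(url=PAGE_URL, threshold=THRESHOLD):
    """Return (ok, elapsed_seconds); ok is False on an error or a slow load."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=threshold) as resp:
            resp.read()
    except Exception:
        return False, time.monotonic() - start
    elapsed = time.monotonic() - start
    return elapsed <= threshold, elapsed

# Run this from Task Scheduler / cron every minute on a machine that is NOT
# the IIS box; when check_page() returns ok=False, call your mail processor.
```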
I think I might have figured this out. Please provide your feedback on this if you wish to.
' Beginning of the page
StartTime = Timer()

' Artificial delay for testing (busy loop)
Do Until i = 5382343
    i = i + 1
Loop

' Body of the page here

' At the bottom of the page
TimeTest = Timer() - StartTime
If TimeTest > 30.5 Then
    Response.Write "SLOW, Admins Notified"
Else
    Response.Write "FAST"
End If
Response.Write "<br>The page was generated in: " & TimeTest & " seconds."
I'm running our Hosted Payment Page integration the same way as the official JS lib:
https://github.com/globalpayments/rxp-js/blob/master/examples/hpp/process-a-payment-embedded-autoload-callback.html
All is good except this bit is very slow: the response coming back to our side with the approved/failed transaction:
https://github.com/globalpayments/rxp-js/blob/9909985b96ab5ed945614affad5f3739827f956b/examples/hpp/process-a-payment-embedded-autoload-callback.html#L16
e.g. The form gets presented, you enter your card details and click submit (on the HPP), then the 3D Secure step shows and does its thing, but the result comes back in the answer line (the above link, line 16) around 4 minutes later. I'm not sure why it's so slow. Sandbox and Production behave the same.
I'm opening a support case anyway, but if anyone has any ideas.
Thanks,
Gavin.
The firewall on the box was blocking https / port 443 from the public. Obviously the libs call out somewhere. Everything was instant once the rule was off.
I need to setup EPL2 label printing from Netsuite. Unfortunately the company this is for is very small and they don't have much money to spend, hence they cannot buy a $1000 label printing solution.
The current system uses a Linux server that sends a file to one of the CUPS print server queues using the Linux cat command. From there it goes to an Intel NetportExpress 10/100 Print Server and then to the Argox V1000+ label printer. This is via a corporate network IP address.
Instead I started looking at some cheap options:
Pop up a browser window with content type text/plain and use a Suitelet to populate that browser window with the EPL2 label printer codes. Then open a print dialog window so that the user can print to the label printer driver. This requires installation of the label printer driver for all users. Sadly I could not get this to print a label.
Integration from NetSuite via a Restlet to an external Python application (on Linux) that can then perform the Linux cat command needed to print the label. The Restlet works nicely, but unfortunately there does not seem to be a way to have some sort of hook that fires when a new label custom record arrives. Therefore I have to keep polling the Restlet from Python every 2 seconds to see if a new label is waiting to be printed. I started running this about an hour ago and so far I have made about 2500 requests without errors. My concurrency limit is 5 and I'm using 2, so that seems OK. The script does very little, so I don't think there will be size limit issues. The problem is just that I wonder whether NetSuite will eventually terminate my script for making so many requests. I'm not sure whether there is such a governance limit, but I can't imagine that they won't eventually stop that sort of thing.
Use the http module to send data in an ajax type manner. This should be able to pickup when new data arrives instead of having to poll (not sure). The problem with this is that I assume I will need a static IP address which is sadly an expensive option.
Use Netsuite SOAP web services which might have a hook instead of polling (not sure). I think this would not be free (like Restlets) either.
So my question is whether there is a better option that I'm missing, or what you would recommend. Also, would I hit some sort of governance limit if I poll every 2 seconds with option 2?
Update: The polling mysteriously stopped working after 7395 requests and about 3 hours. It did not return an error that I'm aware of. The rejected requests count on Integration Governance shows 0.
I used to do the emailing thing quite a bit and it works pretty well. Volume may be an issue.
Another thing to do is get a stable public address with something like ngrok.
ngrok runs on linux/mac/windows so you'd be able to write an app that listens on a particular port. Netsuite would send an https post to that app at (for instance) https://printing.mycompany.ngrok.io and the app would handle local printing.
I believe ngrok runs about $US60/year.
The app can verify identity with some sort of timestamp and hash so that if someone does get the https address, they couldn't easily use all your paper or cause a DoS situation.
We got bamboozled by a printer vendor (Zebra) before we found out that we could print to most printers by sending the raw ZPL/EPL straight to PRINTER_IP:9100.
Look into IPP-enabled printers; most are these days. It saves you thousands in the long run if you have a large warehouse operation like we do.
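For what it's worth, the raw port 9100 trick can be sketched in a few lines of Python; the printer IP and the sample EPL2 commands below are illustrative only:

```python
import socket

def send_raw_label(printer_ip, epl2, port=9100):
    """Send raw EPL2 (or ZPL) straight to the printer's JetDirect-style port."""
    with socket.create_connection((printer_ip, port), timeout=5) as s:
        s.sendall(epl2.encode("ascii"))

# A tiny example label (EPL2 commands; adjust coordinates/fonts to your stock)
label = "\n".join([
    "N",                          # clear the image buffer
    'A50,50,0,4,1,1,N,"HELLO"',   # ASCII text at x=50, y=50
    "P1",                         # print one label
    "",
])
# send_raw_label("192.168.1.50", label)   # printer IP is a placeholder
```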
Instead of polling I would have NetSuite initiate the connection in an afterSubmit User Event script.
I've automated label printing by having NetSuite email attachments to a dedicated mail box which is monitored by a Linux server. My setup is documented here:
https://gist.github.com/michoelchaikin/80af08856144d340b335d69aa383dbe7
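The mailbox-watching side of that approach might be sketched like this in Python (the IMAP host, credentials, and CUPS queue name are placeholders; the gist above documents the actual setup):

```python
import email
import imaplib
import subprocess

IMAP_HOST = "imap.example.com"                   # placeholder
USER, PASSWORD = "labels@example.com", "secret"  # placeholder credentials

def print_attachment(msg_bytes, send=None):
    """Pull every attachment out of a raw RFC822 message and print it."""
    # Default sink: hand raw bytes to the CUPS queue, like the old `cat` trick.
    send = send or (lambda data: subprocess.run(
        ["lp", "-d", "label_queue", "-o", "raw"], input=data))
    msg = email.message_from_bytes(msg_bytes)
    for part in msg.walk():
        if part.get_filename():                  # any part with a filename
            send(part.get_payload(decode=True))

def poll_mailbox():
    """Fetch unseen messages and print their label attachments."""
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, fetched = imap.fetch(num, "(RFC822)")
            print_attachment(fetched[0][1])
```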
I am working on an Android application that will show an HTML page containing only some text on a tablet device. The device will be on and showing this page for long periods of time (several hours). The text on this page will get changed from time to time.
To change the text on the page I've made a separate second page that contains a form to enter the new strings into and a submit button that uses ASP to generate a new version of the first page and save it over top of the original copy. This is set up and working great, but it means that I have to refresh the page very frequently in order to ensure I am always showing the latest message.
I am looking for a way that I could trigger a refresh only when a new message is saved. That way I will not have to refresh the page every minute but the new message will still get shown in a timely manner.
No dice. HTTP is built as a stateless, pull-only protocol (ignoring file uploads): the server can't push data to the client; the client has to actually poll the server for new information.
However, you can minimize the overhead by using an AJAX call with JSON as the transport format instead of generating entire web pages, and updating the page on the client side. The overhead should be minimal for almost any application.
If it were just a web app, I would suggest looking into the various Comet frameworks.
http://www.google.com/search?q=comet+framework
But, since you have an Android shell around it, you can make a Socket connection back to your server and have the server signal when it's time to refresh. It's essentially the same, but you don't need to code up the push in JavaScript if you're more comfortable in Java.
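The server side of that socket-signal idea can be sketched in Python (the port and one-line "refresh" protocol are assumptions; the Android client would simply block on a read and reload the WebView when a line arrives):

```python
import socket
import threading

class RefreshNotifier:
    """Holds long-lived client sockets; pushes a 'refresh' line when asked."""

    def __init__(self, host="127.0.0.1", port=0):
        self.server = socket.create_server((host, port))
        self.clients = []
        threading.Thread(target=self._accept_loop, daemon=True).start()

    @property
    def port(self):
        return self.server.getsockname()[1]

    def _accept_loop(self):
        # Keep accepting tablets; each keeps its connection open.
        while True:
            conn, _ = self.server.accept()
            self.clients.append(conn)

    def signal_refresh(self):
        # Called by the form-submit handler right after it saves a new message.
        for conn in self.clients[:]:
            try:
                conn.sendall(b"refresh\n")
            except OSError:
                self.clients.remove(conn)
```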
Not sure if I'm really on the right forum; if not, just tell me. I have a page coded in ASP (not .NET) which is used to send email. We are currently having a problem in which the page seems to be submitted twice sometimes. Upon checking, we found that those who have this problem are coming from big organisations, so it was suggested that their server might cache the file for some reason.
I would like to know: is there a way in HTML (or ASP) to prevent that from happening? Or is it in IIS that we must set this up?
EDIT: I forgot to mention that sometimes the time between the two mails can be hours, not mere seconds.
Thanks,
I don't see any cache problem here.
The only solution I see is to store the list of emails sent somewhere server-side (db, file system) and check that list before sending.
With this approach, you can be sure to send just one mail to a given address, avoiding double submits or other possible problems.
I do not see how this could have anything to do with caching. After all, a cached page contains the generated html, and thus it would not trigger another execution of the code that sends the email.
However, my best guess is that it has to do with users refreshing the page. To avoid this, you could implement the Post/Redirect/Get pattern, where after sending the mail you redirect to another page (or the same page but with different form parameters). This way the user can refresh the page as many times as they want without triggering another email.
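To make the Post/Redirect/Get flow concrete, here is a minimal sketch using Python's standard-library HTTP server, purely for illustration; the same three steps apply to the ASP page:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class MailFormHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # 1. POST: do the one-time work (send the email) exactly here.
        #    (Hook in your existing mail-sending code.)
        # 2. Redirect with 303 See Other: the browser replaces the POST in its
        #    history with a GET, so refreshing cannot resend the email.
        self.send_response(303)
        self.send_header("Location", "/sent")
        self.end_headers()

    def do_GET(self):
        # 3. GET /sent: a plain confirmation page, safe to refresh forever.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<p>Your message has been sent.</p>")

    def log_message(self, *args):
        pass  # keep the demo quiet
```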
If your problem is caching, there's really nothing you can do to keep an organization from caching it.
You can try to add this code to see if it makes a difference:
Response.Expires = 0
Response.ExpiresAbsolute = Now() - 1
Response.AddHeader "pragma", "no-cache"
Response.AddHeader "cache-control", "private"
Response.CacheControl = "no-cache"
If this doesn't work, you may need to contact that organization's IT department and ask them to add a caching exception for your page/site.
A friend just asked me if I could help him in automating something that is taking up a lot of his time. He grades essays online so basically there is a website that he logs into and sits on a page refreshing it until some data appears (new essays to be graded). Instead of doing this he would like to simply be notified when there are new items that require his attention.
There is no API to work off of and a login is required to access website. What in your opinion is the right tool for this job?
I'll post my idea as an answer but I'm curious as to what everyone suggests.
Edit: He is running windows (Windows Vista). Browser doesn't really matter as long as the site runs in it.
My idea is to write a script for Firefox's Greasemonkey plug-in.
Basically he would log into the page and turn on the script which would constantly be refreshing and scrubbing the site for new items. If some are found it pops up a message and plays a noise (or something like that).
I've never worked with Greasemonkey before but it seems like something like this should be pretty simple.
You can write a small Ruby script using Watir (see) which will actually drive a browser and allow you to scrape data, or scrubyt (examples).
Here is what scrubyt looks like. I'd recommend you generate an email or IM message, but you can do whatever you like. Schedule it to run under cron somewhere - the beauty of this approach is that it doesn't matter if his computer is on, the browser is open, etc.
# Simple eBay example
require 'rubygems'
require 'scrubyt'

ebay_data = Scrubyt::Extractor.define do
  fetch 'http://www.ebay.com/'
  fill_textfield 'satitle', 'ipod'
  submit
  record "//table[@class='nol']" do
    name "//td[@class='details']/div/a"
  end
end

puts ebay_data.to_xml
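If Ruby isn't to your taste, the same scrape-and-notify loop can be sketched with Python's standard library; the URL, session cookie, and page marker below are all placeholders you would need to adapt to the real site (since a login is required, the simplest route is copying the session cookie from a logged-in browser):

```python
import re
import urllib.request

QUEUE_URL = "https://example.com/essays/queue"   # placeholder URL
SESSION_COOKIE = "session=PASTE_VALUE_HERE"      # placeholder session cookie

def count_pending(html):
    # The marker below is hypothetical; inspect the real page for a pattern
    # that reliably identifies a waiting essay.
    return len(re.findall(r'class="pending-essay"', html))

def check_queue(last_count=0):
    """Fetch the queue page; announce any newly appeared items."""
    req = urllib.request.Request(QUEUE_URL, headers={"Cookie": SESSION_COOKIE})
    html = urllib.request.urlopen(req, timeout=30).read().decode("utf-8", "replace")
    n = count_pending(html)
    if n > last_count:
        print(f"{n - last_count} new essay(s) waiting")  # swap in email/IM here
    return n
```

Run it from cron every few minutes, persisting the last count between runs.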