Can background jobs complete if you exit a browser after submitting a request? - google-chrome

I'm new to using background jobs (and to much of programming in general), so I'm aware this must be a simple question, but I haven't found an answer in my research.
I've written a Ruby on Rails web app that I have deployed to Heroku. After a form is submitted, it runs a background process for a web scraper script that takes about 10 minutes to complete.
My question is: if I quit my browser after the request is submitted, will the background job still run and complete? If it will not, will just closing the window still allow it to complete?
My hunch is that it will, since the request goes through the server, but I just want to make sure. Thanks for your input!

I think so. Chrome itself now stays open in the background, but I'm not sure whether the webpage does.

Thanks to Ben Brammer's suggestion, I just went ahead and tried it myself! (It's amazing how I failed to think of something so obvious.)
Yes, it works fine: I quit Chrome immediately after submitting the form to begin the script. I then checked, and the script went through with no issues whatsoever.

Related

5-second delay (pause) from django+apache

I am running a site with django+apache+ssl+mysql+cloudfront+s3.
However, I discovered that at some point, once the first connection and its caching setup have expired, the TTFB increases significantly
(the response only starts after a pause of about 5 seconds).
After removing Apache from the stack, I tried debugging in debug mode and under Gunicorn.
There were no issues with TTFB in debug mode. However, I found that I had the same problem when using Gunicorn.
So I have looked all over Apache, the DB, SSL, Django template rendering, etc.
I don't know what's causing the bottleneck.
Which part should I look at more?
Oh, and I also found that if I send a request twice in a row, I get a response right away without any pause.
Please advise. I think my head is going to break...
Edit: I tested a page without any CDN or database access, but the result was the same.

Can chrome be used, from the command-line, to retrieve a URL's content to a file?

I've been driving myself mad trying to get curl, wget, the Python requests module, and others to simply get me logged in to a website and pull page text from it. I can certainly request HTML from the site, but only as an anonymous user. I've spent a few hours with tricks like Chrome's "Copy as cURL" feature, but the website in question is smart enough to defend against replayed logins.
All I want is a way, from the command-line, to do something like:
chrome.exe --output_to_file page.html https://www.endpoint.com/auth_access_only.html
Essentially, I'm looking for chrome to do for me what cURL does, but I want the command-line invocation to be executed as me. I can see how this might open a potential security issue, but I don't mind at all if I have to do something magical to authorize my script. I'm not looking to do anything evil - I just want to be able to write scripts that are as "me" as I am.
I guess that, if it's truly unavoidable, I could suck it up and dust off Internet Explorer. I'd really rather not do that. I'd feel so dirty.
This is possible, but it's not as simple as you're thinking.
You can use the Chrome Debugging Protocol to remote-control Chrome.
You will need to write some code to make this work - I have done similar tasks using the chrome-remote-interface library for Node.js.
Make sure you understand what a browser profile is and where your profile folder lives.
If Chrome is already running using your browser profile: make sure it was launched with --remote-debugging-port=9002 or similar.
If Chrome is not already running using your browser profile: launch it with --user-data-dir="C:\path\to\your\profile" --remote-debugging-port=9002 or similar.
The "running or not" part is a bit tricky - you cannot launch more than one Chrome instance with the same browser profile, but you need to use this user profile because your login data is stored there. It may actually be easiest to create a separate browser profile that is just used for this automated task, and log in to the site there too.
Then, at a high level, your Node.js code will need to connect to Chrome, load the page, wait for the response, and save it to a file. Have a look at the example code for the chrome-remote-interface library - you can definitely piece together what you need from there.
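As a rough sketch, and assuming the debugging port 9002 and the URL from above, the chrome-remote-interface version might look something like this - it is not tested against your site, so treat it as a starting point rather than a finished script:

const CDP = require('chrome-remote-interface');
const fs = require('fs');

(async () => {
  // Attach to the Chrome instance started with --remote-debugging-port=9002
  const client = await CDP({ port: 9002 });
  const { Page, Runtime } = client;
  try {
    await Page.enable();
    await Page.navigate({ url: 'https://www.endpoint.com/auth_access_only.html' });
    await Page.loadEventFired();
    // Pull the rendered HTML out of the live DOM and write it to a file
    const { result } = await Runtime.evaluate({
      expression: 'document.documentElement.outerHTML'
    });
    fs.writeFileSync('page.html', result.value);
  } finally {
    await client.close();
  }
})();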
Another option, built on the same underlying technology, is puppeteer, a tool for automating Chrome that is designed to start from a fresh profile every time. If you go this route, you'll need to script a few more interactions (a sketch follows below):
Visit the site's login page
Type the login credentials into the form and click the login button
Visit the site's authenticated page and save it to a file.
The benefit of this approach is that the result should be more reliable, preventing issues like expired login sessions.
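A rough puppeteer sketch of those three steps might look like the following - the login URL, form selectors, and environment variables are placeholders you would have to replace for the real site:

const fs = require('fs');
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // 1. Visit the site's login page (placeholder URL)
  await page.goto('https://www.endpoint.com/login');

  // 2. Type the credentials and click the login button (placeholder selectors)
  await page.type('#username', process.env.SITE_USER);
  await page.type('#password', process.env.SITE_PASS);
  await Promise.all([
    page.waitForNavigation(),
    page.click('#login-button'),
  ]);

  // 3. Visit the authenticated page and save its HTML to a file
  await page.goto('https://www.endpoint.com/auth_access_only.html');
  fs.writeFileSync('page.html', await page.content());

  await browser.close();
})();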

My website takes time to load, how do I reduce it?

I have developed my website with ASP.NET and C#, with MySQL as the back end. But even after optimizing CSS, JavaScript, and images, my website www.cloudionpro.com still takes time to load.
Please help: do I need to change something in my code, or is it a MySQL server issue?
I loaded your site in Chrome and it reported that the HTML of the page itself loaded in 93ms, of which 51ms was spent waiting for the HTML to be generated. 51ms is acceptable, but could probably be improved: it's likely you're making a lot of MySQL database calls that could be optimized (by parallelising them, or executing them as a query batch).
Chrome reports that the Google Maps API you're using failed to load because of scripting errors, which is also causing problems; open your browser's console for the details. It also looks like your site has a dependency on jQuery, but the page never loads it.

Debugging why the HTML Cache isn't working

I was wondering if anybody could direct me to any tools for debugging the cache.manifest file for offline HTML5 access. I recently downloaded a program called Manifesto which allows me to look up the cache manifest on loading a page. Everything seems to be working fine; however, it keeps saying that the status is "uncached". So it seems like although it is checking to make sure the cache files are there, it isn't actually caching them upon load. What's going on, and more importantly, how do I figure out how to solve it?
I got it to work. To be honest, it might have been working before, but because I had some different PHP scripts running in the background, I had to be a little careful when pulling it up.
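For anyone else hitting the "uncached" status, it can help to compare against a minimal setup. Roughly, it looks like the example below; the filenames are placeholders, and the manifest must be served with the text/cache-manifest MIME type:

<!-- index.html: the page opts in via the manifest attribute -->
<html manifest="cache.manifest">
  ...
</html>

# cache.manifest: first line must be exactly CACHE MANIFEST
CACHE MANIFEST
# v1 - change this comment to force the browser to re-download the cache
CACHE:
index.html
style.css
app.js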

Selenium IDE does not record events from a server, but does from disk

I'm encountering an issue where Selenium IDE seems not to record a specific event on a real webserver.
However, if I save the page (including all resources) via Firefox entirely to disk, open the saved file in the browser, and try to record the same event, Selenium IDE works correctly and records the event as expected.
I'm not sure what is causing this behavior - maybe some race condition exists inside Selenium IDE (latencies from a real webserver are higher than on a local file URL), or maybe it has something to do with URLs - but these are only quick guesses.
Does anybody have suggestions/best practices for how to track down this kind of Selenium IDE issue?
UPDATE:
I figured out my root issue, only through trial and error, but with success. I filed a bug with the Selenium project.
The reason it worked locally was a file-not-found error after the form submit, which did not happen on the server side. Strangely, that file-not-found error prevented the bug from occurring.
However, the main part of this question isn't really answered yet; next time I still won't know how to quickly track down such issues. So for now, I'll keep it open.
I have a similar issue. Selenium IDE does not record anything from the website "http://suppliers.inwk.com". You may not have credentials to get login access, but if you can get the login page itself recorded in Selenium IDE, then I think we can get to the root cause, or at least get a clue.