I started using the debugger for PHP recently and it's working well, but I have one problem. When I open a page from Chrome, I get a notification from PhpStorm, "Incoming connection from Xdebug", which I can accept or ignore; that's fine, and then I can debug. That is the connection for the first page, e.g. index.php.
The problem is that the page I'm opening also displays some images and includes some CSS and JS files. Chrome makes additional requests for all of those files, and for each of them I also get the "Incoming connection from Xdebug" message. So for every valid request I have to click Ignore about 20 more times to dismiss every other file requested after the first one, which contains the page source.
I tried adding the paths of the directories I want to ignore to "Skipped paths" under Languages & Frameworks -> PHP -> Debug, but it isn't working.
Can this be configured somehow? Maybe there are some obvious ignores - I can't debug image files, for example.
I'm working on a large project, which uses IE (IE11) to display local htm pages. (Yes, IE is required. I can't use a different browser). We aren't using a web server; everything is pulled from the local drive. No http requests are made during this.
We're planning to send parts of it out to remote locations to use on those sites. At the moment I'm using a self-extracting RAR archive so that the htm, js, css, and media files can all be put in the proper locations. After deployment, the file structure at the remote location should mirror that of the dev system. Everything works fine on the dev system. However, at the remote sites some of the anchor links stop working. After some testing, it appears that if the file I'm trying to link to originated from a different computer, the link does nothing. But if I make a new file, copy everything from the old file into it, and replace the old one with the new one, the link works. The files are identical in content, but only the one that originated at the remote site works.
<a href="foo.htm">Link</a>
The above would work if foo.htm was created on that computer, but not if it was created on a different computer (such as the Dev computer). This issue appears to occur with .htm/html files, as well as .css and .js files. It might also occur for media files, but I didn't check.
I imagine there is some kind of security setting or something that needs to be changed, but I can't figure out what. We plan to distribute to a lot of sites, so I can't manually fix the problem for every site.
Any idea why this is happening?
Edit: No errors pop up in the console when I click on the links. I didn't check out the network tab, since everything is local, but that might be a thing to try tomorrow. Also, I didn't know about file blocking. I'll check that too. If that were to be the case, how can I unblock the files automatically? This may be deployed to people who don't have a strong grasp of computers.
From the description of the issue, it looks like when you send the zip file to another machine, the files get blocked on that machine.
Below are some ways to unblock the files or to prevent the blocking. Choose whichever suits your requirements.
The easiest way to unblock is to right-click the zip file, click Properties, tick the Unblock checkbox, and click OK. Then unzip the file; this unblocks all of the contained files at once.
You can also unblock the files by using the PowerShell command below.
dir "folderpath" -Recurse | Unblock-File
Note: Make sure to run the script as an administrator.
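If you want to confirm that blocking is actually what is happening, a quick check (a rough sketch; the path below is a placeholder) is to look for the Zone.Identifier alternate data stream that Windows attaches to files that came from another zone:
# A blocked file carries a Zone.Identifier alternate data stream; if this prints
# a [ZoneTransfer]/ZoneId entry, the file has the "mark of the web".
# If the file is not blocked, PowerShell reports that the stream does not exist.
# 'C:\deploy\foo.htm' is a placeholder path.
Get-Content -Path 'C:\deploy\foo.htm' -Stream Zone.Identifier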
Further, you can set a group policy to prevent the files from being blocked in the first place.
Group policy path:
User Configuration-> Administrative Templates-> Windows Components-> Attachment Manager
Policy name: Do not preserve zone information in file attachments
Double-click this policy and set it to Enabled, so that zone information is not attached to files in the first place.
Note: this will leave you vulnerable to malware and is not recommended.
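If group policy isn't practical at every remote site, the same setting can also be applied per user through the registry. This is a sketch only; the key path and value name are what I recall for the Attachment Manager policy and should be verified before rolling anything out, and the same malware caveat applies.
# Per-user equivalent of "Do not preserve zone information in file attachments".
# Key path and value name are assumptions to verify; 1 = do not preserve zone information.
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies\Attachments'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'SaveZoneInformation' -Value 1 -Type DWord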
I have a PowerShell script that opens an HTML file saved locally and parses it. When it is run from the server where it was created, it works fine. However, when I run it from the server where it will need to reside going forward, I keep getting an IE security pop-up. The code is:
# Read the whole HTML file as a single string
$Source = Get-Content -Path $SrcFilePath -Raw
# Create an MSHTML document via COM and load the markup into it
$HTML = New-Object -Com "HTMLFile"
$HTML.IHTMLDocument2_write($Source)
# Collect the outer HTML of every <table> element
$results = $HTML.all.tags('table') | ForEach-Object OuterHtml
The variable $SrcFilePath is the path where the HTML file is stored locally. When it gets to the line
$HTML.IHTMLDocument2_write($Source)
the pop-up appears.
Once I click "Close", the script resumes. The issue is that this script is going to be automated, so no one will be around to click the "Close" button when it appears.
I've tried creating a share to the folder where the HTML file is saved and adding it to Trusted Sites in IE. I've changed the security level, and I've also added about:security_scriptdriver64.exe to the Local Intranet zone in IE, but none of these have worked. Any ideas?
DETAILS UPDATE
The server I'm running this on is Windows 2016 and the IE version is 11.
The html file I'm parsing is saved locally on the server i.e. E:\temp\webpage.html
When I created the share and added it to the trusted sites (I tried with both NetBIOS and FQDN), the option "Require server verification" was unchecked.
I've also tried adding about:security_scriptdriver64.exe to Trusted Sites.
I've also turned off "IE Enhanced Security Configuration" through Server Manager for both Admin and Users and this did not work.
UPDATE 2
I mentioned in my last comment that I had added about:security_scriptdriver64.exe to Trusted Sites, tested, and it did not work. I started looking into Invoke-WebRequest and playing with that. I was having issues, so I thought I would go back to the IHTMLDocument2_write solution. I decided to start with a bare-bones HTML file that had one table with one row and one column; my plan was to slowly add tags from the original HTML file to the bare-bones one. I got as far as updating the title tag when I thought I would try again with the original. It worked.
I removed about:security_scriptdriver64.exe from Trusted Sites and tried again. It still worked. I left it that way and everything kept working fine, so back to the drawing board.
I started again by adding that about: entry to Trusted Sites. It didn't work, so I went back to the bare-bones HTML. There is a block of JavaScript in the file; when I added it to the bare-bones file, the pop-up came back, and when I removed it, everything worked. So it's the JavaScript. I still don't understand why it started working last week, since the HTML file is always generated with that JavaScript block. SMH.
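If the embedded script is what triggers the security prompt, one possible workaround (a sketch only, and it assumes the tables you need exist in the static markup rather than being generated by that script) is to strip the <script> blocks before handing the markup to the COM document:
# Remove <script>...</script> blocks before parsing, so MSHTML never sees the JavaScript.
# Assumes the tables are present in the static markup, not produced by the script.
$Source = Get-Content -Path $SrcFilePath -Raw
$Source = $Source -replace '(?s)<script\b[^>]*>.*?</script>', ''
$HTML = New-Object -Com "HTMLFile"
$HTML.IHTMLDocument2_write($Source)
$results = $HTML.all.tags('table') | ForEach-Object OuterHtml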
I am trying to improve my site's load speed. When checking the network requests, there is something that doesn't make sense to me:
How can the content that is going to be requested (CSS and JS files) be downloaded before the HTML content has finished downloading?
I have tried a hard reload with an empty cache in Chrome, but the same thing happens. Moreover, the files seem to be downloaded from the server and not from the cache.
Another thing I don't get is why Chrome paints the bar chart almost entirely blue when the majority of the time is spent waiting for the server to respond (TTFB).
Thanks in advance!
There are a couple of possibilities:
The HTML doesn't download all at once. It's possible for the server to send part of the page, pause, then send the rest. If the part that's sent first contains references to CSS, JS, image, font, or other files, the browser can start downloading those files as soon as it sees them referenced.
HTTP/2 supports "server push", a scheme where the web server can indicate to the client, through HTTP headers, that it should start downloading specific other files. Judging from some of the file names in your network tab, you're using Cloudflare; they use server push for some features, including "Rocket Loader".
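If you want to confirm this from outside the browser, one quick check (a rough sketch; https://example.com/ is a placeholder for your page) is to look at the response headers for Link: ...; rel=preload entries, which Cloudflare has used to trigger pushes:
# Fetch the page and print the response headers; a Link header with rel=preload
# is one hint that the server/CDN is pushing or preloading assets.
# 'https://example.com/' is a placeholder for the page being tested.
$response = Invoke-WebRequest -Uri 'https://example.com/' -UseBasicParsing
$response.Headers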
Unfortunately I cannot post the code here because, despite all kinds of debugging, I cannot pinpoint the problem well enough to present the problematic segment. If I load an HTML page that in turn triggers other requests (JS, CSS, PNG, etc.), the server loads all of them except one... The server/browser hangs for some time, and then after about a minute or so the resource does actually load! If I test the server with individual manual requests for these very URLs from the HTML file, it works fine.
While trying to load the HTML file, Chrome's Network tab shows "pending..." for one request, or sometimes two. But ultimately all the requested URLs are served. That's what bugs me...
I tried setting http.globalAgent.maxSockets to 100, as suggested here, since the HTML file makes more than 8 requests for the different JS, CSS, etc. files. This did not help either.
I have reached a dead end. Any help would be appreciated.
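One way to make the comparison with manual requests more systematic (a rough sketch; the URLs below are placeholders for the assets referenced by the page) is to time each asset request individually from the command line and compare the results with what the browser reports:
# Time each asset request individually; if every one returns quickly here,
# the delay is more likely in how concurrent requests are being handled.
# The URLs are placeholders for the assets referenced by the HTML page.
$urls = @(
    'http://localhost:3000/css/style.css',
    'http://localhost:3000/js/app.js',
    'http://localhost:3000/img/logo.png'
)
foreach ($url in $urls) {
    $elapsed = Measure-Command { Invoke-WebRequest -Uri $url -UseBasicParsing | Out-Null }
    "{0,-45} {1:N0} ms" -f $url, $elapsed.TotalMilliseconds
}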
PROBLEM:
Today we modified a static HTML page on a client's website -
we added a couple of images and modified the font, then FTPed the file to the client's web server.
We realized we had made a mistake with the font size, corrected it, and FTPed the file again.
Even after a hundred refreshes, the website was displaying only the file (with the wrong font) that we had FTPed the first time.
We FTPed the corrected file several times, but the file with the wrong font was the only file being served by the web server.
OUR GUESS:
We think that the web server cached the file that we had FTPed the first time, and is serving it back to us on subsequent requests even though the file had changed.
We tried the following techniques (but were unsuccessful):
We added a parameter to the querystring (?R=33343545)
We tried the technique suggested below, i.e. POSTing to the web page in question, but got "405 Method Not Allowed. The HTTP verb used to access this page is not allowed."
http://www.mnot.net/blog/2006/02/18/invalidation
Please advise whether we are on the right path and whether there is anything else we can try in such situations.
EDIT:
We would like to find out if there is a way (similar to the two methods above) to do this just from the browser, without touching any settings on the web server.
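For narrowing down where the stale copy is coming from, one diagnostic (a rough sketch; the URL is a placeholder for the client's page) is to request the page from the command line, where no browser cache is involved, and compare the cache-related response headers with the timestamp of the file you uploaded:
# Fetch the page outside the browser and print cache-related response headers.
# Headers such as Age, X-Cache, or an old Last-Modified/ETag suggest an intermediary
# or server-side cache is serving the stale copy.
# 'http://www.example.com/page.html' is a placeholder URL.
$response = Invoke-WebRequest -Uri 'http://www.example.com/page.html' -UseBasicParsing
$response.Headers.GetEnumerator() |
    Where-Object { $_.Key -match 'Cache|Age|ETag|Last-Modified|Expires|X-' } |
    ForEach-Object { "{0}: {1}" -f $_.Key, ($_.Value -join ', ') }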