Validate markup of password-protected sites with W3C

I have an online app whose HTML markup I want to validate against the W3C validator.
The problem is that users need to log in first to access the pages.
How do I go about validating these pages?

With an HTML validator extension. Or press Ctrl-U to view the source and copy-paste it into the W3C validator's direct-input page.

You have to run the service they provide on your server:
http://validator.w3.org/docs/install.html
That's what we've done; it works great.
I think it's best practice to validate your site's source while it's in development, not once it's in production... Treat this just like running unit tests. You shouldn't let markup errors go live either.

Assuming you mean markup validation, log in yourself, go to the page you want, view source, and use the direct input option.

I assume you mean the W3C validator service. You can copy-paste the HTML contents into the validator, or save the HTML to a file and upload it. If you have the Firefox Web Developer extension, it provides a 'Validate local HTML' option.

OmniValidator for Firefox:
Omnivalidator sends the content of the currently visible page to one or more validation services configured in the extension preferences. These validators may be publicly hosted (the defaults are the publicly hosted W3C Markup Validator and Validator.nu) or hosted locally. The results are retrieved from the validators, parsed, and displayed in a collapsible error display panel and summarized with an icon and tooltip on the Omnivalidator button, which can be placed on a browser toolbar. Validation is initiated either by clicking on the Omnivalidator button or by configuring URLs which are automatically validated with one or more validators in the extension preferences.

Disable HTML autofix in Chrome

I am creating a Node.js app that uses Nunjucks as its view engine. I added tests to verify that the generated HTML is valid (for now I just want to check that all tags are properly closed). To do this I spin up the application, go to the site using headless Chrome, take a snapshot, and run validation code on the output files.
The problem is that browsers try to fix HTML code automatically. They close tags by themselves in order to create more or less valid HTML. Is there a way to disable this feature?
I would like to be sure that I created a valid HTML document using Nunjucks, rather than counting on the browser to fix it itself.
If you validate the source directly, Chrome will not change the HTML. Try this in your browser: view-source:https://stackoverflow.com/questions/65091531/disable-html-autofix-in-chrome
If you use the developer panel to inspect the DOM, Chrome tries to correct poor HTML. 😊
You could maybe just do a GET request to the application, and validate the result, instead of using Chrome.
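For example, here is a minimal sketch of that approach, assuming a Node 18+ test environment (so the global fetch API is available, and the snippet runs inside an async test or an ES module) and that the app is listening on localhost:3000 with a hypothetical /my-page route. The raw response body is never parsed by a browser, so nothing gets auto-fixed before it reaches the publicly hosted Nu HTML Checker:

// Fetch the raw markup straight from the running app.
const res = await fetch('http://localhost:3000/my-page'); // hypothetical route
const html = await res.text();

// Send it as-is to the Nu HTML Checker and ask for a JSON report.
const report = await fetch('https://validator.w3.org/nu/?out=json', {
  method: 'POST',
  headers: { 'Content-Type': 'text/html; charset=utf-8' },
  body: html,
}).then(r => r.json());

// Fail the test if the checker reported any errors (e.g. unclosed tags).
const errors = report.messages.filter(m => m.type === 'error');
if (errors.length > 0) {
  throw new Error('Invalid HTML:\n' + JSON.stringify(errors, null, 2));
}

This also sidesteps the headless-Chrome snapshot step entirely, since the markup is validated exactly as the server sent it.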

Saving static HTML page generated with ReactJS

Background:
I need to allow users to create web pages for various products, with each page having a standard overall appearance. So basically, I will have a template, and based on the input data I need the HTML page to be generated for each product. The input data will be submitted via a web form, following which the data should be merged with the template to produce the output.
I initially considered using a pure templating approach such as Nunjucks, but moved to ReactJS as I have prior experience with the latter.
Problem:
Once I display the output page (by adding the user input to the template file with placeholders), I am getting the desired output page displayed in the browser. But how can I now obtain the HTML code for this specific page?
When I try to view the source code of the page, I just see the contents of 'public/index.html', stating:
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
Expectedly, the same happens when I try to save (Save As...) the HTML page via the browser. I understand why the above happens.
But I cannot find a solution to my requirement. Can anyone tell me how I can download/save the static source code for the output page displayed in the browser?
I have read about possible solutions such as installing the 'React/Redux Development Extension' etc... but these would not work as a solution for external users (who cannot be expected to install these extensions to use my tool). I need a way to do this in a production environment.
p.s. Having read the "background" info of my task, do let me know if you can think of any better ways of approaching this.
Edit note:
My app is currently actually just a single page that accepts user data via a form and displays the output (in a full-screen dialog). I don't wish to have these output pages 'published' on the website; they are simply to be saved/downloaded for internal use. So simply being able to get the "source code" for the displayed view/page in the browser and saving it to a file would solve my problem. But I am not sure if there is a way to do this?
It's recommended that you use a well-known static site generator such as Gatsby or Next for static sites, since "npx create-react-app my-app" is for single-page apps.
(ref: https://reactjs.org/docs/create-a-new-react-app.html#recommended-toolchains)
If I'm understanding correctly, you need to generate a new page link for each user. Each of your users will have their own link (http/https) to share with their users.
For example, a scheduling tool will need each user to create their own "booking page", which is a generated link (could be on your domain --> www.yourdomain.com/bookinguser1).
You'll need user profiles to store each user's custom page, a database, and so on. If you're not comfortable building that, use something like an e-commerce tool that will do it for you.
You can open the developer tools (F12) and go to "Elements".
Then right-click on the html tag and choose "Edit as HTML".
Then copy everything (Ctrl + A).
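If the people using the tool should not have to open DevTools at all, here is a minimal sketch of a scripted alternative, assuming it runs inside the React app itself, for example behind a "Download HTML" button (the function name and default filename are made up for illustration). It serializes the DOM as currently rendered and offers it as a file download:

// Serialize the rendered page and trigger a download of it as an .html file.
function downloadRenderedHtml(filename = 'page.html') {
  const html = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;
  const blob = new Blob([html], { type: 'text/html' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}

Because this captures the DOM after React has rendered it, the saved file is a static snapshot of the output page rather than the 'public/index.html' shell.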

Get generated source of an HTML page programmatically

What is the easiest way to get the generated web page of a website programmatically, in any programming language?
The generated web page I need is the one you get if you go to a web page in Firefox, press Ctrl-A, then right-click and choose "View Selection Source".
One way that comes to mind is to dig into the Chromium open-source browser code, get the rendered page from it, and use that in our service.
But I believe there may be another solution out there that I am not aware of.
In javascript, you can get the full document content with
var html = document.documentElement.innerHTML;
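If you also want the root html element itself included in the string (which is closer to what "View Selection Source" shows), outerHTML does that:

// outerHTML serializes the <html> element itself plus everything inside it.
var html = document.documentElement.outerHTML;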
If you want to do this server-side, in PHP you can use file_get_contents().
Ex:
file_get_contents(path_to_webpage);
For reference:
http://php.net/manual/en/function.file-get-contents.php
https://www.w3schools.com/php/func_filesystem_file_get_contents.asp

can view source be disabled by a website?

Is it possible to create a webpage that doesn't allow the website's source to be displayed?
No.
From encrypt-html.com:
Almost all browsers provide a convenient way to view the source code of the currently opened page. We regularly receive e-mails with the same question - how to disable the view source command.
An HTML file cannot enable or disable the built-in browser functionality in most cases. It's not possible to remove the view-source command from the browser menus or to make it non-working. But if the source is encrypted, what the user will see is just a lot of garbage characters - not your original code. So the view source command is practically disabled for each encrypted file.
No, you cannot hide the plain text HTML output of your web server.
How the HTML is generated is separate from the actual HTML that gets sent from the server.
This is the way the internet and world wide web were designed. If you are using a server-side scripted web application to generate your HTML, then your business intelligence / process / code is hidden, provided that people do not have access to browse the actual script file on your server.
If you would like to customize one of the open source browsers, like Firefox or Chrome, you could disable the "view source" functionality. It might be a worthwhile option for certain intranet or internal business applications. XUL and Firefox is one of the possibilities our company looked at to control what the end user could access. The only real security you have to keep your source secure is on the server side, as network / protocol monitors could still pull the HTML as it moves over the network.
You might use plugin-based content, like a Java applet, Flash, etc., to somewhat "hide" the real content. Of course, since it will eventually be displayed on screen, there is nothing to prevent a determined user from reverse engineering your page.
Here is an example of a site with "view source code" disabled in any browser: http://www.transelectrica.ro/StareSistem/protocoale/starea_sistemului.php The question is: how did they do it?
I have used a method to block right-clicking, but you can still view the source in Chrome by typing view-source:example.com into the address bar.
Disable right-click:
<script type='text/javascript'>
// Disable text selection on the given element and show a default cursor.
function disableSelection(target){
    if (typeof target.onselectstart != "undefined")             // IE route
        target.onselectstart = function(){ return false; };
    else if (typeof target.style.MozUserSelect != "undefined")  // Firefox route
        target.style.MozUserSelect = "none";
    else                                                        // all other routes (e.g. Opera)
        target.onmousedown = function(){ return false; };
    target.style.cursor = "default";
}
</script>
<!-- Block the context (right-click) menu on the whole page -->
<body oncontextmenu='return false;'>
<script type='text/javascript'>
disableSelection(document.body);
</script>
</body>

how to test html code of a big asp.net website

I want to test a big ASP.NET website through Visual Studio. Is that possible, or do I need another tool? If yes, please tell me its name.
By "testing" I mean testing the HTML code: checking that there are no problems in the HTML and that it is written in a standards-compliant way.
If you have Firefox, you can install the HTML Validator or Total Validator add-ons.
If you have Opera, just right-click your page in the browser and you will see something like "Validate source" on the menu.
Or else, you can use the validator at w3.org. There, you can validate HTML by URL, file upload or by direct input.
If your site is online, visit http://www.htmlhelp.com/tools/validator/ and run a recursive validation.
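If you would rather script the check than rely on browser add-ons, here is a minimal sketch of validating a page by URL against the publicly hosted Nu HTML Checker (assuming Node 18+ with the global fetch API, run inside an async function or ES module; the URL is a placeholder):

// Ask the Nu HTML Checker to fetch and validate a publicly reachable page.
const pageUrl = 'https://example.com/'; // placeholder: use your own page
const res = await fetch(
  'https://validator.w3.org/nu/?out=json&doc=' + encodeURIComponent(pageUrl)
);
const report = await res.json();
console.log(report.messages.filter(m => m.type === 'error'));

This only works for pages the checker can reach; for an intranet-only site you would host a validator yourself or POST the markup directly, as in the earlier example.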