C++ Windows Sockets: Downloading an HTML file [closed]

Given:
Suppose that I have a website called "exampledomain.com", and that on that website, I have one file called "my_doc.html", the full URL address of which is "https://www.exampledomain.com/my_directory/my_doc.html". (Not my actual website; this is just hypothetical).
Objective:
I'm trying to develop a Client-Side Application, using C++ & Windows Sockets, that downloads my HTML file, parses it, extracts some specific information, runs some calculations, and displays its results to the user.
Question:
How do I download the HTML file from the server to the directory "C:/ExampleDirectory/" on the client-side computer, using the Windows Sockets Library?
Clarification:
I want to write this client-side program to work with the existing website, i.e. I want it to download the file in the same way that an Internet browser like Microsoft Edge would.
Edit:
Just to clarify, the server uses a secure, account-based system, and thus the document would be transferred using HTTPS. I'm not really sure if this would affect the solution, but I thought it'd be worth mentioning.

Don't.
A socket library is not an appropriate tool for talking to a web server. HTTP is complex enough that you want to use a specialized HTTP library, and there are several available; libcurl springs to mind. And of course there is the WinHttp tag: https://stackoverflow.com/tags/winhttp/info.
And for the HTML part, you'd want to use an HTML parsing library to extract the desired info.
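For instance, here is a minimal sketch of the WinHTTP route, hard-coding the hypothetical host, path, and destination directory from the question. It does no authentication or cookie handling (which the account-based site mentioned in the edit would likely need) and only bare-bones error handling:

```cpp
// download.cpp - fetch https://www.exampledomain.com/my_directory/my_doc.html
// with WinHTTP and save it to C:/ExampleDirectory/my_doc.html.
// Build with MSVC: cl download.cpp   (winhttp.lib is linked via the pragma below)
#include <windows.h>
#include <winhttp.h>
#include <fstream>
#include <iostream>
#include <vector>
#pragma comment(lib, "winhttp.lib")

int main()
{
    HINTERNET hSession = WinHttpOpen(L"ExampleClient/1.0",
                                     WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
                                     WINHTTP_NO_PROXY_NAME,
                                     WINHTTP_NO_PROXY_BYPASS, 0);
    HINTERNET hConnect = hSession ? WinHttpConnect(hSession, L"www.exampledomain.com",
                                                   INTERNET_DEFAULT_HTTPS_PORT, 0)
                                  : nullptr;
    HINTERNET hRequest = hConnect ? WinHttpOpenRequest(hConnect, L"GET",
                                                       L"/my_directory/my_doc.html",
                                                       nullptr, WINHTTP_NO_REFERER,
                                                       WINHTTP_DEFAULT_ACCEPT_TYPES,
                                                       WINHTTP_FLAG_SECURE)  // HTTPS
                                  : nullptr;

    bool ok = hRequest
           && WinHttpSendRequest(hRequest, WINHTTP_NO_ADDITIONAL_HEADERS, 0,
                                 WINHTTP_NO_REQUEST_DATA, 0, 0, 0)
           && WinHttpReceiveResponse(hRequest, nullptr);

    if (ok)
    {
        std::ofstream out("C:/ExampleDirectory/my_doc.html", std::ios::binary);
        for (;;)
        {
            DWORD available = 0;
            if (!WinHttpQueryDataAvailable(hRequest, &available) || available == 0)
                break;                                  // no more body data
            std::vector<char> buffer(available);
            DWORD read = 0;
            if (!WinHttpReadData(hRequest, buffer.data(), available, &read))
                break;
            out.write(buffer.data(), read);             // append this chunk to the file
        }
    }
    else
    {
        std::cerr << "Request failed, error " << GetLastError() << "\n";
    }

    if (hRequest) WinHttpCloseHandle(hRequest);
    if (hConnect) WinHttpCloseHandle(hConnect);
    if (hSession) WinHttpCloseHandle(hSession);
    return ok ? 0 : 1;
}
```

WINHTTP_FLAG_SECURE makes WinHTTP negotiate TLS itself, so the HTTPS part of the question is handled by the library rather than by hand-written socket code.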

Related

How can I call different web pages with query strings? [closed]

I have the following problem. I downloaded a web page with the Wayback Machine downloader. The downloader saves the query strings with %3f because question marks are not allowed in Windows file names. For example, when I go to the splash.aspx?page=3 page, nothing happens: whatever value I put after the ?, the same page appears. How can I serve different pages based on the query strings?
Btw: I use IIS for hosting.
The problem is that the downloader creates a static copy of the site: there is no server-side processing, and handling multiple pages via a query string requires that.
Assuming that the Wayback Machine actually iterated through all the pages, you will need a different downloading tool that can find them all, create static versions of each page, and rewrite the links in each, since the Wayback Machine downloader doesn't know how to do paging itself.
But really, stepping back, I think what you're trying to do is the problem. The Wayback Machine is for creating snapshots of sites at a point in time. Its goal is not to back up and restore backend functionality (which it couldn't do even if it wanted to, since it doesn't have access to the backend of every site on the internet).
You didn't specify what your end goal is, but my guess is that while the Wayback Machine copy might work for scraping the data, you'll have to write your own server code and website if you want to serve it again (assuming you have the rights to do so).

How to upload a website without HTML files [closed]

Hi!
I have created a website where all the files are CSS, JS, and Pug files. When I want to publish the site, I need to provide an index.html file from which the site will start, but I do not have such a file.
Does anyone know how to deal with this problem?
In addition, I have been starting the site by running it on localhost:3000. Does anyone know how to start it so that it will still work once I upload it?
Thanks in advance to all the helpers.
Your mention of localhost:3000 implies that you have written a website which depends on Node.js for server-side code (at a minimum this will involve the translation of your Pug templates into HTML on demand).
There are two general approaches you can take to solve this problem:
1. Find hosting which supports your server-side code and deploy your Node.js application to it. (This will not be typical static or shared hosting.)
2. Generate static HTML documents from your application and upload those HTML documents. (The specifics will depend on exactly what your server-side implementation does and will probably be a significant amount of work. Typically, if you wanted to take this approach, you would have used a framework designed to output static sites from the outset.)
Obviously if you have your server-side code processing user input (such as form submissions) option 2 will not work.

How to run a .cpp application from a .htm file? [closed]

I've made a webpage in HTML and I want to run a .cpp application from it. With the way I've learnt to do it, the source code is just displayed.
The only way to do this is ActiveX, which by default is not supported by anything anymore. Only Internet Explorer supports it, but even that needs to be specifically allowed.
But you'd still have to compile the C++ code first and do quite a huge amount of programming work before you'd have a valid ActiveX DLL. Then you'd also somehow need to deploy it to all website clients.
TL;DR: No, no no no. Running C/C++ on web clients is a no-go.
However, if what you want is for the website client to be able to invoke a C++ application at the server, that is very possible. You still need to have that application compiled for the server environment, though. For small "run and get the results" tasks I've found it easiest to use AJAX to call PHP scripts, as PHP can execute things on the server.
Signed Java applets can run executables from the browser, but that approach is frowned upon nowadays.
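To make the "invoke a C++ application at the server" idea above concrete, here is a minimal CGI-style sketch. It assumes the web server is configured to execute the compiled binary as a CGI program, and the parameter names a and b are invented for the example:

```cpp
// add.cpp - minimal CGI program. The web server sets the QUERY_STRING
// environment variable (e.g. "a=2&b=3") and returns whatever we print
// on stdout to the browser.
#include <cstdlib>
#include <iostream>
#include <string>

// Naive query-string parsing: find "key=" and read the number after it.
// Good enough for a sketch, not for production.
static double get_param(const std::string& query, const std::string& key)
{
    const std::string token = key + "=";
    const std::size_t pos = query.find(token);
    if (pos == std::string::npos)
        return 0.0;
    return std::atof(query.c_str() + pos + token.size());
}

int main()
{
    const char* qs = std::getenv("QUERY_STRING");   // provided by the web server
    const std::string query = qs ? qs : "";

    const double a = get_param(query, "a");
    const double b = get_param(query, "b");

    // A CGI response is headers, then a blank line, then the body.
    std::cout << "Content-Type: text/plain\r\n\r\n";
    std::cout << (a + b) << "\n";
    return 0;
}
```

A PHP script called via AJAX, as suggested above, can play the same role: it runs the compiled binary on the server and hands its output back to the browser.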

What technologies and protocols would one use to allow a C program to start and interact with a web server? [closed]

Here is a simplified example.
I have a C program that runs as a daemon on an embedded Linux system. For example, we will say the program is a calculator that just does addition.
When someone starts the program, I would like a web server to be launched on the system that allows people to use the calculator remotely. The web server will just serve a simple HTML page with one "solve" button and two input boxes. When someone clicks solve, the numbers in the text boxes need to be sent to the C program, and then the solution needs to be sent back to the web server and displayed on the website.
I hope this isn't overly broad, but I'm just looking for what technologies would be used to accomplish this and a brief overview of how they interact, and hopefully I can take it from there and start digging in.
You don't need to start an external web server. Since your app is a daemon, you could use an HTTP server library inside your application, i.e. have an embedded HTTP server through that library, e.g. D. Moreno's libonion, GNU libmicrohttpd, EHS, Mongoose, etc.
If you already have an external web server, you could configure it to proxy your internal application's web service, or make your application a FastCGI (or maybe SCGI) server.
PS: You need to be familiar with HTML5, HTTP, HTTP POST requests, ...
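As a rough sketch of the embedded-server approach, here is a tiny "addition calculator" daemon using GNU libmicrohttpd, one of the libraries listed above. The port, URL paths, and parameter names are invented for the example:

```cpp
// calc.cpp - embed an HTTP server in the daemon with GNU libmicrohttpd.
// Build with: g++ calc.cpp -lmicrohttpd
// Note: on libmicrohttpd older than 0.9.71 the handler returns int
// instead of enum MHD_Result.
#include <microhttpd.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>

// Served on "/" - a form with two inputs and a "solve" button that
// submits a GET request to /add?a=..&b=..
static const char* FORM =
    "<html><body><form action='/add'>"
    "<input name='a'> + <input name='b'> "
    "<button type='submit'>solve</button></form></body></html>";

static enum MHD_Result handle_request(void* cls, struct MHD_Connection* conn,
                                      const char* url, const char* method,
                                      const char* version, const char* upload_data,
                                      size_t* upload_data_size, void** con_cls)
{
    std::string body = FORM;                      // default: serve the input form
    if (std::strcmp(url, "/add") == 0)            // /add?a=1&b=2  ->  "3"
    {
        const char* a = MHD_lookup_connection_value(conn, MHD_GET_ARGUMENT_KIND, "a");
        const char* b = MHD_lookup_connection_value(conn, MHD_GET_ARGUMENT_KIND, "b");
        const double sum = (a ? std::atof(a) : 0.0) + (b ? std::atof(b) : 0.0);
        body = std::to_string(sum);
    }

    struct MHD_Response* resp = MHD_create_response_from_buffer(
        body.size(), (void*)body.c_str(), MHD_RESPMEM_MUST_COPY);
    enum MHD_Result ret = MHD_queue_response(conn, MHD_HTTP_OK, resp);
    MHD_destroy_response(resp);
    return ret;
}

int main()
{
    struct MHD_Daemon* d = MHD_start_daemon(MHD_USE_SELECT_INTERNALLY, 8080,
                                            nullptr, nullptr,
                                            &handle_request, nullptr,
                                            MHD_OPTION_END);
    if (!d)
        return 1;
    std::printf("Calculator listening on http://localhost:8080/ - press Enter to quit\n");
    std::getchar();
    MHD_stop_daemon(d);
    return 0;
}
```

The same pattern works with the other libraries mentioned: the daemon keeps doing its normal work while the library's internal thread answers the HTTP requests.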
As the interaction takes place between processes, I think you need an interprocess communication mechanism here. However, you may not be allowed to change the code of the web server. Here are some things that may help you:
Use a database (e.g. MySQL) so that both sides can insert and fetch data with SQL.
If you use PHP or some other scripting language, try sockets.

When does a web app need a Gmail-style loading page? [closed]

When does a single page web app need a loading page in the style of Gmail or RTM or the many others? Should there be a size at which I introduce one? Is it just about time?
And, is it just loading in the JS, CSS files etc, or is it doing processing too?
Also, as an aside, how would I even go about introducing such a page? Are there plugins/guides, etc?
Thanks!
It is a complex question; as so often, the best answer is "it depends".
I think it is not about size but simply about what kind of user experience you want to offer. The richer and more dynamic the content, the richer your client must be.
So if you do many things dynamically with JS on the client side, like Gmail, where the UI never freezes, the calls are asynchronous, and content refreshes are done by JS, you can end up with an architecture where the server offers an API and the client side contains more of the business logic.
The basic idea is to have an HTML file, with some CSS and JS code responsible for loading and sending data from/to the server and updating the UI.
This is different from the "traditional" model where the client requests a server page, the server processes the request, generates HTML (plus CSS and JS), and returns it to the client; then any click on a button generates a new request that returns a new page, and so on.
I suggest you take a look at the Dojo toolkit.
Programming in the Gmail way can produce lots of JS files and really big HTML files. Dojo simplifies that a lot and also manages modules: the client-side code is not all loaded when the HTML page loads; instead, Dojo tracks which "modules" you need and loads them when needed.
Hope this clarifies things a bit.