So the Mongoose C library is pretty straightforward. I've been able to use its event system, URL recognition, multi-form example, and its connection system to build a simple login system. I've used C++ with MinGW, mongoose.c and mongoose.h, and my browser. Now I'd like to implement images.
But there's a fundamental issue I can't get around: I can transfer either an HTML document or an image. The JPG displays happily on its own, as does the HTML document, so long as either is alone. My code is relatively simple. For HTML:
// pretend std::string HTMLAsString holds all the HTML for the document
mg_send_data(conn, HTMLAsString.c_str(), HTMLAsString.size());
When I want to send an image, it's quite similar:
while ((n = fread(buf, 1, sizeof(buf), fp)) > 0) {
    mg_send_data(conn, buf, n);
}
mg_send_data(conn, "\r\n", 2);
Both of these work (I've cut out the irrelevant parts, like how the string is composed or how the buffer is filled; suffice it to say those aspects work). I can have HTML formatting with a 'missing image' placeholder, or I can have an image shown but no HTML.
How do I send BOTH an image and HTML?
Mr. Andersen should get credit for this, but I can't mark a comment as an answer, and I want to close the question.
He was dead on. First the client browser requests the page and the server sends it. When the client browser receives the HTML document, it then sends requests to the server for all images/files specified in the HTML.
I was checking all requests from clients for their addresses using conn->uri. This let me run simple string comparisons to figure out which page was being requested. However, I wasn't checking for any OTHER strings apart from those I had pages for.
As soon as I put a simple:
std::cout << "REQUESTED:" << conn->uri << std::endl;
I saw the requests clear as day (in my case /image.jpg). So I put the aforementioned image code together with just another string comparison in the reply function, and presto-magico, images embedded within HTML, all playing nice and happy together.
Thank you for answering my question.
P.S. The file-sending code is a little different:
char buf[1024];
int n;
FILE *fp = fopen(cstrpath, "rb");
if (fp == NULL) {
    printf("ERROR, no %s found.", cstrpath);
    return;  // don't try to read from a NULL file handle
}
while ((n = fread(buf, 1, sizeof(buf), fp)) > 0) {
    mg_send_data(conn, buf, n);
}
fclose(fp);
mg_send_data(conn, "\r\n", 2);
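For completeness, here is a minimal sketch of the URI dispatch described above, as it might sit inside the reply/handler function. The helpers send_html_page and send_file are hypothetical wrappers around the mg_send_data snippets shown earlier, and the exact event hook depends on the Mongoose version in use:

// inside the request handler: route by conn->uri and serve the matching resource
if (strcmp(conn->uri, "/") == 0) {
    send_html_page(conn);                // the HTML document
} else if (strcmp(conn->uri, "/image.jpg") == 0) {
    send_file(conn, "image.jpg");        // the image referenced by the HTML
} else {
    mg_send_data(conn, "Not found", 9);  // or respond with a proper 404
}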
Related
A web form was developed, and instead of saving individual fields, the whole form was saved in a text column as HTML. I need to extract data between various HTML tags, so I want to create a query that writes each set of tags to a table for me to then use. If this is possible, please can someone advise how it can be achieved.
Thank you.
OK, I've noticed you weren't happy with my last answer, however I am still certain you need server-side code to handle SQL queries. Basically, without any server-side code, what you want to do is like saying "here's a TV I bought, and here's the DVD release of the movie "John Wick"; I want to watch it on this TV." Without a DVD player you can't really do it, though; that is the role of PHP or ASP.NET in this case. Since I am not familiar with PHP, I can only show a solution in ASP.NET C# which I put together some time ago.
Here's how I solved this on a site I built a while back. It is not the cleanest, but it most certainly worked.
In ASP.NET you have the page file, which is similar to an HTML or an XML file, using a lot of pointy brackets. Create one like this:
page file, body:
<asp:TextBox ID="HiddenTextBox" style="display: none;" runat="server"
onclick="OnStuff" OnTextChanged="TheUserChangedMe"
AutoPostBack="true"></asp:TextBox>
Scroll up a bit, and in the head section add some JavaScript where you can handle the text change instantly. As soon as something happens, like the user clicking on an image, or... well, doing anything you want to respond to, you need the ID of the element that was clicked.
page file, head:
<script type="text/javascript">
function MarkItems_onclick() {
var Sender = window.event.srcElement;
document.getElementById('<%= HiddenTextBox.ClientID%>').value = Sender.id;
__doPostBack('<%= HiddenTextBox.ClientID%>', 'TextChanged');
}
</script>
page's .cs file, the C# code behind
//
//add these on top:
//
using System.Configuration;
using System.Data.SqlClient;
using System.Data;
//
// later somewhere write this:
//
protected void TheUserChangedMe(object sender, EventArgs e)
{
SqlConnection conn1 = new SqlConnection(ConfigurationManager.ConnectionStrings["UserRegConnectionString"].ConnectionString);
//
Note: the connection string has to be set up beforehand. Make sure you create one; Visual Studio will let you do that in no time. Don't forget that this connection string defines where your DB is located and gives the site the information it needs to reach it in the first place.
//
// somewhere you need to read out what you have in your HiddenTextbox:
//
String stringToProcess = HiddenTextBox.Text;
Process your stuff; here I assume you cut it up accordingly, so you end up with an insertQ variable with proper syntax. In order to add these values it should look something like this:
String insertQ = "insert into OrdersTable(OrderId, Type, Quantity) " +
                 "values (@OrderId, @StuffType, @StuffQuantity)";
How to access the database in ASP.NET C#:
conn1.Open();
SqlCommand insertComm = new SqlCommand(insertQ, conn1);
insertComm.Parameters.AddWithValue("#OrderId", nonStringVariable.ToString());
insertComm.Parameters.AddWithValue("#StuffType", aStringVariable);
insertComm.Parameters.AddWithValue("#StuffQuantity", "some random text");
insertComm.ExecuteNonQuery();
conn1.Close();
That's pretty much it. Your SQL database will have three fields filled in every time this function runs. It's a bit messy, but for my site it was crucial to handle every onclick event, and with this you can put 23 checkboxes, 10 pictures and whatnot on the page as elements, yet you'll know what happened every time the user clicked something.
I'm not a professional either, however I think you're going to need something on the server side to process this query, like ASP.NET or PHP. Basically, the server-side code would have no problem generating your page's content according to what comes back from the DB.
I'm trying to write an application in C++ using Qt 5.7. Basically it should be a WebSocket server, using QWebSocket, capable of sending an image processed with OpenCV to an HTML client. What I'm trying to do is encode the image in base64, transmit it, and on the client put the encoded string into the src of an image tag.
Just to test, I can send/receive text messages correctly, so the WebSocket architecture is working, but I have some problems with images. These are my code snippets:
Server
cv::Mat imgIn;
imgIn = cv::imread("/home/me/color.png",CV_LOAD_IMAGE_COLOR);
QByteArray Img((char*)(imgIn.data),imgIn.total()*imgIn.elemSize());
QByteArray Img64 = Img.toBase64();
pClient->sendBinaryMessage(Img64);
Client
<img id="ItemPreview" src="" style="border:5px solid black" />
....
websocket.binaryType = "arraybuffer";
websocket.onmessage = function (evt) {
console.log( "Message received :", evt.data );
document.getElementById("ItemPreview").src = "data:image/png;base64," + evt.data;
};
I think most of the problems are on the server, because the base64 sequence I get from the image is different from the one produced by an online image-to-base64 converter.
On the client I receive this error in the console and nothing is shown:
data:image/png;base64,[object ArrayBuffer]:1 GET
data:image/png;base64,[object ArrayBuffer] net::ERR_INVALID_URL
Any hints?
SOLUTION
Thanks to the suggestions, I can provide the working code:
Server
imgIn = cv::imread("/home/me/color.png", CV_LOAD_IMAGE_UNCHANGED);
std::vector<uchar> buffer;
cv::imencode(".png",imgIn,buffer);
std::string s = base64_encode(buffer.data(),buffer.size());
pClient->sendTextMessage(QString::fromStdString(s));
Client
Removed this line:
websocket.binaryType = "arraybuffer";
The base64 encoding in the server is done using this code:
Encode/Decode base64
This line in the server:
imgIn = cv::imread("/home/me/color.png",CV_LOAD_IMAGE_COLOR);
decodes a PNG-formatted image and places it in memory as a load of raw pixel data (plus possibly some row padding, which you don't take account of; see below). That's what you're base64 encoding.
This line in the client:
document.getElementById("ItemPreview").src = "data:image/png;base64," + evt.data;
is expecting a PNG image, but that isn't what you're sending; you've just pushed out a load of raw pixel data, with no dimensions or stride or format information or anything else.
If your client wants a PNG, you're going to have to use something like imencode to write PNG data to a memory buffer, and base64 encode that instead.
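As a rough illustration (not the poster's exact code), the encode step could use cv::imencode together with Qt's own QByteArray::toBase64, so no external base64 helper is needed; the file path and pClient are taken from the question:

cv::Mat img = cv::imread("/home/me/color.png", CV_LOAD_IMAGE_UNCHANGED);
std::vector<uchar> pngBytes;
cv::imencode(".png", img, pngBytes);   // real PNG bytes, not raw pixels
QByteArray base64 = QByteArray(reinterpret_cast<const char*>(pngBytes.data()),
                               static_cast<int>(pngBytes.size())).toBase64();
pClient->sendTextMessage(QString::fromLatin1(base64));

The client-side data URL from the question then works as-is, because the payload really is base64-encoded PNG data.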
One other important thing to note is that decoded images may have row padding: a few bytes at the end of each row for memory-alignment purposes. Therefore, the actual length of each image row may exceed the width of the image multiplied by the size of each pixel in bytes. That means that this operation:
QByteArray Img((char*)(imgIn.data),imgIn.total()*imgIn.elemSize());
may not, in fact, wrap the entire image buffer in your QByteArray. There are various ways to check the stride/step of an image, but you'd best read the cv::Mat docs as it isn't worth repeating them all here. This only matters if you're doing raw byte-level image manipulation, as you are here. If you use imencode, you don't need to worry about this.
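For what it's worth, if you ever do need the raw bytes, a quick guard against the padding issue is to check cv::Mat::isContinuous() first. A small sketch (again, not the poster's code):

if (imgIn.isContinuous()) {
    // no row padding: total() * elemSize() really covers the whole buffer
    QByteArray raw(reinterpret_cast<const char*>(imgIn.data),
                   static_cast<int>(imgIn.total() * imgIn.elemSize()));
} else {
    // rows are padded: imgIn.step gives the real bytes per row, so copy row
    // by row, or call imgIn.clone() to get a continuous copy before wrapping.
}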
I have a class in my application that displays HTML-based documentation that is stored as a set of HTML files on the user's hard drive.
My documentation module has a feature that allows it to remember the most recently-viewed page. I'm currently using QWebView::url() to get the URL of the current page so I can store it in the config. The next time the documentation viewer is activated, the URL is pulled from the config, processed appropriately, and is then sent back to the QWebView. That way, the user can pick up where he/she left off.
QWebView::url() is a good way to get the current URL, except it isn't as precise as I need it to be. It only captures the base URL without any of the extras. Every heading in my documentation has an id attribute attached to it, and I used this to make a browsable table of contents: the user can click any item in the TOC to jump to the appropriate heading.
However, QWebView::url() only returns something like file:///Z:/doc/foo.html when I need file:///Z:/doc/foo.html#heading, where #heading is the last item clicked in the TOC. How can I get it to include those internal links (fragment identifiers, I think they're called) in the URL string?
In a perfect world, QWebView would automatically know when the user scrolls past each heading in the current document and would update the URL string automatically. This would allow for nearly seamless reading between sessions. I don't expect it to work that way, but is this even possible?
This works like a charm for me.
main.cpp
#include <QWebView>
#include <QDebug>
#include <QApplication>
int main(int argc, char **argv)
{
QApplication application(argc, argv);
QWebView view;
view.load(QUrl("http://qt-project.org/doc/qt-5/QWebView.html"));
view.show();
QObject::connect(&view, &QWebView::urlChanged, [&view]() {
qDebug() << "Url being viewed:" << view.url().toString();
});
return application.exec();
}
main.pro
TEMPLATE = app
TARGET = main
QT += widgets webkit webkitwidgets
CONFIG += c++11
SOURCES += main.cpp
Build and Run
qmake && make && main
Then try clicking, for instance, on the "details" link on the right-hand side.
Output
Url being viewed: "http://qt-project.org/doc/qt-5/QWebView.html"
Url being viewed: "http://qt-project.org/doc/qt-5/QWebView.html#details"
Url being viewed: "http://qt-project.org/doc/qt-5/QWebView.html#details"
Even though the signal is emitted twice, you can see that the url accessor method returns the correct url.
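Since the original goal was to remember the most recently viewed page between sessions, the fragment-aware URL from urlChanged can be persisted directly. A minimal sketch, assuming the config is kept in QSettings (the organization/application names and the "doc/lastUrl" key are made up), which would sit alongside the code above:

QSettings settings("MyOrg", "MyDocViewer");
// save the full URL, #fragment included, every time it changes
QObject::connect(&view, &QWebView::urlChanged, [&settings](const QUrl &url) {
    settings.setValue("doc/lastUrl", url.toString());
});
// restore it the next time the viewer starts
view.load(QUrl(settings.value("doc/lastUrl", "file:///Z:/doc/foo.html").toString()));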
I have an email which contains perfectly formatted HTML, with the single exception that images are linked differently: <img width=456 height=384 id="_x0000_i1026" src="cid:X.MA2.1374935634#aol.com" alt="cid:X.MA4.1372453963#aol.com">. The email has other parts, including the image with this content ID. The problem is that I don't know how to point the QWebView to the data (which I have). Is there a way to add the image to its cache?
It's possible but not easy.
Basically you need to:
1- provide your own QNetworkAccessManager-inherited class, overriding createRequest() to catch these links referring to "cid":
QNetworkReply* MyManager::createRequest(Operation op,
                                        const QNetworkRequest &req,
                                        QIODevice *outgoingData)
{
    if (op == GetOperation && req.url().scheme() == "cid")
        return new MyNetworkReply(req.url().path());
    else
        return QNetworkAccessManager::createRequest(op, req, outgoingData);
}
2- Connect it to the webview with:
MyManager* manager = new MyManager;
view->page()->setNetworkAccessManager(manager);
3- Provide an implementation of MyNetworkReply, which inherits from QNetworkReply (a QIODevice subclass). This is the complicated part. You need to provide at least readData(), bytesAvailable(), and a constructor that sets up the reply in terms of HTTP headers and launches the actual asynchronous read with QTimer::singleShot(). A minimal sketch follows after this list.
4- Decode the attachment (probably from base64, if it's a picture) into a QByteArray for your MyNetworkReply::readData() to read from.
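Here is a minimal sketch of what MyNetworkReply might look like, modelled on the Qt Labs example linked below. decodeAttachment() is a hypothetical helper you would write to locate and base64-decode the mail part for a given content-id, and the image/png content type is just an assumption to adjust to the real attachment:

#include <QNetworkReply>
#include <QNetworkRequest>
#include <QTimer>
#include <cstring>

class MyNetworkReply : public QNetworkReply
{
    Q_OBJECT
public:
    explicit MyNetworkReply(const QString &contentId, QObject *parent = 0)
        : QNetworkReply(parent), offset(0)
    {
        data = decodeAttachment(contentId);  // your code: the decoded image bytes
        setHeader(QNetworkRequest::ContentTypeHeader, QVariant("image/png"));
        setHeader(QNetworkRequest::ContentLengthHeader, QVariant(data.size()));
        open(ReadOnly);  // not ReadOnly|Unbuffered, see the note below
        // deliver the data asynchronously, once the event loop is running again
        QTimer::singleShot(0, this, SIGNAL(readyRead()));
        QTimer::singleShot(0, this, SIGNAL(finished()));
    }
    void abort() {}
    qint64 bytesAvailable() const
    { return data.size() - offset + QNetworkReply::bytesAvailable(); }
protected:
    qint64 readData(char *buffer, qint64 maxSize)
    {
        if (offset >= data.size())
            return -1;
        qint64 count = qMin(maxSize, qint64(data.size() - offset));
        std::memcpy(buffer, data.constData() + offset, count);
        offset += count;
        return count;
    }
private:
    QByteArray data;
    qint64 offset;
};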
There's a complete example on qt.gitorious.org written by Qt Labs developers in the Qt 4.6 days. They display an internally generated PNG, not an external mail attachment, but the general steps are as described above. See:
http://qt.gitorious.org/qt-labs/graphics-dojo/blobs/master/url-rendering/main.cpp
However, this code has a flaw with Qt 4.8. In the constructor for RendererReply, where it does:
open(ReadOnly|Unbuffered);
this should be:
open(ReadOnly);
otherwise WebKit never reads the entire data and displays the broken-picture icon.
I have a textarea in my page that is an HTML input field. The intention is to allow the user to register a confirmation HTML page that will be shown in their users' browsers after a certain action takes place. You can think of it as PayPal's confirmation after you pay for something and it redirects you to a page that says "Thanks for your purchase". This is already implemented, but now I'm thinking about the users' security (XSS/SQL injection).
What I want to know is how to filter out certain HTML tags, such as <script>, <embed>, and <object>, safely inside my controller's POST action, so that if I detect malicious markup inside the HTML, I stop execution before saving. Right now I am doing it like this:
[CustomHandleError]
[HttpPost]
[ValidateAntiForgeryToken]
[AccessDeniedAuthorize(Roles = "Admin,CreateMerchant")]
public ActionResult Create(MerchantDTO merchantModel)
{
if (ModelState.IsValid)
{
if (!IsSafeConfirmationHtml(merchantModel.ConfirmationHtml))
{
ModelState.AddModelError("ConfirmationHtml", "Unallowed HTML tags inputted");
return View("Create", merchantModel);
}
.
.
.
}
}
and my IsSafeConfirmationHtml is defined as
private bool IsSafeConfirmationHtml(string html)
{
if (html.ToLower().Contains("<script") || html.ToLower().Contains("<embed") || html.ToLower().Contains("<object"))
{
return false;
}
return true;
}
Is there a smarter, cleaner way to do this? I mean, I don't want false positives blocking the words "object", "script", etc., but I also don't want to be fooled by encodings that translate "<" to "%3C" or the like...
On topic: does spacing inside tags work? Example: < script > alert("1"); < / script >?
So one thing you could do to defeat the encoding attack would be to run UrlDecode and HtmlDecode on the input before checking it (HtmlDecode is probably superfluous, but it depends on what you do with the script).
Another thing to speed up your checking would be to turn to a precompiled regex.
private static Regex disallowedHtml = new Regex(@"script|embed|object",
    RegexOptions.IgnoreCase);
private bool IsSafeConfirmationHtml(string html)
{
    Match match = disallowedHtml.Match(html);
    return !match.Success;
}
The static Regex instance cuts out most of the regex overhead for every run but the first, making the match much faster than running three separate Contains calls. You could make the regex complex enough to search for opening angle brackets, HTML entities, and URL-encoded characters, match any whitespace between those characters and the actual tag name, and so on. Microsoft's regex documentation has gotten quite good over the years.
I still wouldn't say this makes you 100% safe from a user (uploader? customer? the right word depends on your business model) running an XSS or injection attack against visitors to your site. They could point to an image or a CSS file that comes back with MIME type x-application, or some such. And HTML is changing pretty rapidly these days. The best way to guard against that is to have a human involved in an approval process as well, but humans make mistakes and computers can be fooled, and there's no law that says those two things can't happen at the same time. But you are right to put some safeguards in place.