Writing a full website to socket with microcontroller - html

I'm using a web server to control devices in the house with a microcontroller running .netMF (netduino plus 2). The code below writes a simple html page to a device that connects to the microcontroller over the internet.
while (true)
{
    Socket clientSocket = listenerSocket.Accept();
    bool dataReady = clientSocket.Poll(5000000, SelectMode.SelectRead);
    if (dataReady && clientSocket.Available > 0)
    {
        byte[] buffer = new byte[clientSocket.Available];
        int bytesRead = clientSocket.Receive(buffer);
        string request =
            new string(System.Text.Encoding.UTF8.GetChars(buffer));
        if (request.IndexOf("ON") >= 0)
        {
            outD7.Write(true);
        }
        else if (request.IndexOf("OFF") >= 0)
        {
            outD7.Write(false);
        }
        string statusText = "Light is " + (outD7.Read() ? "ON" : "OFF") + ".";
        string response = WebPage.startHTML(statusText, ip);
        clientSocket.Send(System.Text.Encoding.UTF8.GetBytes(response));
    }
    clientSocket.Close();
}
public static string startHTML(string ledStatus, string ip)
{
    string code = "<html><head><title>Netduino Home Automation</title></head><body> <div class=\"status\"><p>" + ledStatus + " </p></div> <div class=\"switch\"><p>On</p><p>Off</p></div></body></html>";
    return code;
}
This works great, so I wrote a full jQuery Mobile website to use instead of the simple HTML. This website is stored on the SD card of the device and, using the code below, should be served in place of the simple HTML above.
However, my problem is the netduino only writes the single HTML page to the browser, with none of the JS/CSS style files that are referenced in the HTML. How can I make sure the browser reads all of these files, as a full website?
The code I wrote to read the website from the SD is:
private static string getWebsite()
{
    string text = string.Empty;
    try
    {
        using (StreamReader reader = new StreamReader(@"\SD\index.html"))
        {
            text = reader.ReadToEnd();
        }
    }
    catch (Exception e)
    {
        throw new Exception("Failed to read " + e.Message);
    }
    return text;
}
I replaced the string code = "..." bit with:
string code = getWebsite();

How can I make sure the browser reads all of these files, as a full website?
Isn't it already? Use an HTTP debugging tool like Fiddler. As I read from your code, your listenerSocket is supposed to listen on port 80. Your browser will first retrieve the results of the getWebsite call and parse the HTML.
Then it'll fire more requests, as it finds CSS and JS references in your HTML (none shown). These requests will, as far as we can see from your code, again receive the results of the getWebsite call.
You'll need to parse the incoming HTTP request to see what resource is being requested. It'll become a lot easier if the .NET implementation you run supports the HttpListener class (and it seems to).
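If HttpListener isn't an option and you stay with raw sockets, a rough sketch of serving each requested file straight off the SD card could look like the following. This is not your code: the method name, path handling, and content-type mapping are assumptions you'll need to adapt to your file layout.
// Sketch only: parse the request line and stream the matching file from the SD card.
// Requires System.IO, System.Net.Sockets and System.Text.
private static void SendRequestedFile(Socket clientSocket, string request)
{
    // The first line of the HTTP request looks like: "GET /css/jquery.mobile.css HTTP/1.1"
    string firstLine = request;
    int newline = request.IndexOf('\n');
    if (newline > 0) firstLine = request.Substring(0, newline);
    string path = firstLine.Split(' ')[1];
    if (path == "/") path = "/index.html";

    string filePath = "\\SD" + path.Replace('/', '\\');
    string contentType = path.IndexOf(".css") >= 0 ? "text/css"
                       : path.IndexOf(".js") >= 0 ? "application/javascript"
                       : "text/html";

    using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    {
        string header = "HTTP/1.1 200 OK\r\nContent-Type: " + contentType +
                        "\r\nContent-Length: " + fs.Length + "\r\nConnection: close\r\n\r\n";
        clientSocket.Send(System.Text.Encoding.UTF8.GetBytes(header));

        // Send the file in small chunks to keep memory use low on the Netduino.
        byte[] chunk = new byte[1024];
        int read;
        while ((read = fs.Read(chunk, 0, chunk.Length)) > 0)
        {
            clientSocket.Send(chunk, 0, read, SocketFlags.None);
        }
    }
}
Called from inside the accept loop in place of the fixed getWebsite/startHTML response, each of the jQuery Mobile files referenced by index.html would then be returned when the browser asks for it.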

Related

search embeded webpage source in vb.net

I wrote a program that includes an embedded web browser, which loads a website with a part that changes (about twice a week, with no regular timing pattern). After refreshing the webpage at a specified time interval, I want to search the opened webpage's source code for that particular part.
I found many questions similar to mine, but none of them cover what I want:
search an embedded webpage's source (they search the webpage without embedding it, and I had to embed it because I have to log in before I can see the particular page)
So this is the procedure I'm trying to implement:
1. Open a website in the embedded web browser.
2. After the user has logged in, a button press in the program hides the embedded web browser and starts refreshing the page at a time interval (like every minute), checking whether the particular code has changed in the source of that opened webpage.
Any other/better ideas appreciated.
Thanks
Many years ago I wrote an app to reintegrate forum posts from several pages into one and I struggled with the login issue too and thought it was only possible using an embedded browser. As it turns out, it's possible to use System.Net in .NET to handle web pages that need a login as you can pull the cookies out and keep them on hand. I would suggest you do that and move away from the embedded browser.
Unfortunately I wrote the code in C# originally, but as it's .NET and is mostly class-based, it shouldn't be too difficult to port over.
The Basic Principle
Find out what information is included in the POST when you login, which you can do in Chrome with developer mode on (F12). Convert that to a byteArray, POST it to the page, store the cookies and make another call with the cookie data later on. You will need a class variable to hold the cookies.
Code:
private void Login()
{
    byte[] byteArray = Encoding.UTF8.GetBytes("username=" + username + "&password=" + password + "&autologin=on&login=Log+in"); // Found by investigation
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("yourURL");
    request.AllowAutoRedirect = false;
    request.CookieContainer = new CookieContainer();
    request.Method = "POST";
    request.ContentLength = byteArray.Length;
    request.ContentType = "application/x-www-form-urlencoded";
    Stream dataStream = request.GetRequestStream();
    dataStream.Write(byteArray, 0, byteArray.Length);
    dataStream.Close();
    WebResponse response = request.GetResponse();
    if (((HttpWebResponse)response).StatusCode == HttpStatusCode.Found)
    {
        // Well done, your login has been accepted
        loginDone = true;
        cookies = request.CookieContainer;
    }
    else
    {
        // If at first you don't succeed...
    }
    response.Close();
}
private string GetResponseHTML(string url)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.AllowAutoRedirect = false;
    // Add cookies from Login()
    request.CookieContainer = cookies;
    request.ContentType = "application/x-www-form-urlencoded";
    WebResponse response = request.GetResponse();
    string sResponse = "";
    StreamReader reader = null;
    if (((HttpWebResponse)response).StatusCode == HttpStatusCode.OK)
    {
        reader = new StreamReader(response.GetResponseStream());
        sResponse = reader.ReadToEnd();
        reader.Close();
    }
    response.Close();
    return sResponse;
}
Hope that helps.
I had to change to C# and I found what I was looking for:
string webPageSource = webBrowser1.DocumentText;
That gave me the source of web page opened in webBrowser1 control.
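For the refresh-and-compare part of the procedure, a minimal sketch (assuming a WinForms Timer, the same webBrowser1 control, and an illustrative marker string) could look like this:
// Sketch only: poll the already-logged-in WebBrowser control once a minute and
// compare the interesting part of the source against the previous snapshot.
private string lastSnippet;
private System.Windows.Forms.Timer pollTimer;

private void StartPolling()
{
    webBrowser1.DocumentCompleted += OnPageLoaded;

    pollTimer = new System.Windows.Forms.Timer();
    pollTimer.Interval = 60 * 1000; // every minute
    pollTimer.Tick += (s, e) => webBrowser1.Navigate(webBrowser1.Url); // re-request the page; login cookies stay in the control
    pollTimer.Start();
}

private void OnPageLoaded(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    string source = webBrowser1.DocumentText;

    // "<!-- interesting part -->" is a placeholder for whatever marks the changing section.
    int start = source.IndexOf("<!-- interesting part -->");
    string snippet = start >= 0 ? source.Substring(start) : source;

    if (lastSnippet != null && snippet != lastSnippet)
    {
        // The watched part of the page has changed - react here.
    }
    lastSnippet = snippet;
}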

Getting WP8 web requests to be synchronous

I am trying to port some code from a Windows form application to WP8, and have run into some issues regarding asynchronous calls.
The basic idea is to do some UAG authentication. In the Windows Forms code, I do a GET on the portal homepage and wait for the cookies. I then pass these cookies into a POST request to the validation URL of the UAG server. It all works fine in the form, since all the steps are sequential and synchronous.
Now, when I started porting this to WP8, the first thing I noticed was that GetResponse() wasn't available; instead I had to use BeginGetResponse(), which is asynchronous and calls a callback function. This is no good for me, since I need to ensure this step finishes before I do the POST.
My Windows form code looks like this (taken from http://usingnat.net/sharepoint/2011/2/23/how-to-programmatically-authenticate-to-uag-protected-sharep.html):
private void Connect()
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(this.Url);
    request.CookieContainer = new CookieContainer();
    request.UserAgent = this.UserAgent;
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        //Get the UAG generated cookies from the response
        this.Cookies = response.Cookies;
    }
}
private void ValidateCredentials()
{
    //Some code to construct the headers and the POST body (data) goes here...
    HttpWebRequest postRequest = (HttpWebRequest)WebRequest.Create(this.ValidationUrl);
    postRequest.ContentType = "application/x-www-form-urlencoded";
    postRequest.CookieContainer = new CookieContainer();
    foreach (Cookie cookie in this.Cookies)
    {
        postRequest.CookieContainer.Add(cookie);
    }
    postRequest.Method = "POST";
    postRequest.AllowAutoRedirect = true;
    using (Stream newStream = postRequest.GetRequestStream())
    {
        newStream.Write(data, 0, data.Length);
    }
    using (HttpWebResponse response = (HttpWebResponse)postRequest.GetResponse())
    {
        this.Cookies = response.Cookies;
    }
}
public CookieCollection Authenticate()
{
    this.Connect();
    this.ValidateCredentials();
    return this.Cookies;
}
The thing is this code relies on synchronous operation (first call Connect(), then ValidateCredentials() ), and it seems WP8 does not support that for Web requests. I could combine the two functions into one, but that won't solve my problem fully since later on this needs to be expanded to access resources behind the UAG, so it would need a modular design.
Is there a way to "force" synchronization?
Thanks
You can still continue your steps in the callback function using the asynchronous model. Or you can use the new HttpClient, which works with the await keyword, so you can write your code in a sequential, synchronous-looking way.
You can get HttpClient through nuget
install-package Microsoft.Net.Http
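For example, the two steps could become awaitable methods that still run strictly one after the other. This is only a sketch: the login form field names and the UserName/Password properties are assumptions, while Url, ValidationUrl and the cookie idea come from your class.
// Needs: using System.Net; using System.Net.Http; using System.Threading.Tasks;
//        using System.Collections.Generic;
private readonly HttpClientHandler handler = new HttpClientHandler
{
    CookieContainer = new CookieContainer(),
    UseCookies = true,
    AllowAutoRedirect = true
};

private async Task ConnectAsync()
{
    using (var client = new HttpClient(handler, disposeHandler: false))
    {
        // The GET primes the CookieContainer with the UAG-generated cookies.
        HttpResponseMessage response = await client.GetAsync(this.Url);
        response.EnsureSuccessStatusCode();
    }
}

private async Task ValidateCredentialsAsync()
{
    using (var client = new HttpClient(handler, disposeHandler: false))
    {
        // Field names depend on your UAG login form - these are placeholders.
        var form = new FormUrlEncodedContent(new[]
        {
            new KeyValuePair<string, string>("user", this.UserName),
            new KeyValuePair<string, string>("password", this.Password)
        });

        // The handler automatically re-sends the cookies captured by ConnectAsync.
        HttpResponseMessage response = await client.PostAsync(this.ValidationUrl, form);
        response.EnsureSuccessStatusCode();
    }
}

public async Task<CookieContainer> AuthenticateAsync()
{
    await ConnectAsync();              // completes before the next line runs
    await ValidateCredentialsAsync();
    return handler.CookieContainer;
}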

Load full Website WinRT

I want to load the Kepler reference page with HttpClient like this:
string resourceAddress = _url;
HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, resourceAddress);
HttpClient httpClient = new HttpClient();
// Do not buffer the response:
HttpResponseMessage response = new HttpResponseMessage();
response = await httpClient.SendAsync(request,
    HttpCompletionOption.ResponseContentRead);
using (Stream responseStream = await response.Content.ReadAsStreamAsync())
{
    int read = 0;
    byte[] responseBytes = new byte[(Int32)responseStream.Length];
    do
    {
        read = await responseStream.ReadAsync(responseBytes, 0, responseBytes.Length);
    } while (read != 0);
}
But I think the page won't be loaded completely, i.e. without all the images and iframes etc...
Downloading just the first piece of HTML is rarely going to be enough to give you all the elements of the page, even if you parse it and include all the linked images etc. There is also CSS and JavaScript that will bring new content into view when you open a page in a browser, and getting all this yourself is going to be an effort similar to implementing your own browser. Your best bet would be to either load the page once in a WebView control and let it cache its content, or use a WebView and scan the DOM to try to get all the elements. You could also write a web service that would download the page for you and deliver the whole package... assuming that the page doesn't require authentication.
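A minimal sketch of the WebView route (assuming a WebView named PageWebView declared in the page's XAML, and a placeholder URL):
// The WebView does the full browser work: images, CSS, JavaScript, iframes.
private void LoadPage()
{
    PageWebView.Navigate(new Uri("http://example.com/kepler-reference")); // placeholder URL
}

// On Windows 8.1 you can then read the rendered DOM back out of the control:
private async Task<string> GetRenderedHtmlAsync()
{
    return await PageWebView.InvokeScriptAsync(
        "eval", new[] { "document.documentElement.outerHTML;" });
}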

Is it possible to embed images in e-mail message in Sharepoint?

Currently I'm sending e-mail messages via SPUtility.SendMail. I'd like to embed images into my message so I can give it a little bit of style (div backgrounds, logo images etc.).
Is this possible?
P.S. I can't give direct URL addresses to image SRCs because the resources are located on a site that belongs to a private network and requires authentication to access the files.
Edit:
I did some research before asking here; of course the first thing I encountered was System.Net.Mail (did you know there is a whole web site devoted to it?). But the SharePoint deployment team in my client's company has some strict rules about custom coding. They have coding guidelines and everything. I'm trying to stick with the SP SDK as hard as I can.
The most straightforward way for me has been to use System.Net.Mail, since you can inline your own content.
Here's a sample usage:
using (MailMessage msg = new MailMessage("fromaddress", "toaddress"))
{
    msg.Subject = "subject";
    msg.Body = "content";
    msg.IsBodyHtml = true;
    SmtpClient smtp = new SmtpClient("smtp server name");
    smtp.Send(msg);
}
The same concept applies to using SPUtility.SendEmail (aside from the fact that you'll need a reference to your SPWeb):
From http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.utilities.sputility.sendemail.aspx
try
{
    SPWeb thisWeb = SPControl.GetContextWeb(Context);
    string toField = "someone@microsoft.com";
    string subject = "Test Message";
    string body = "Message sent from SharePoint";
    bool success = SPUtility.SendEmail(thisWeb, true, false, toField, subject, body);
}
catch (Exception ex)
{
    // handle exception
}
The second boolean parameter in SendEmail is set to false to disable HTML encoding, so you can use your <img> and <div> tags in the message body.
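If you do go the System.Net.Mail route, you can embed the images themselves inline (rather than linking to a URL the recipient can't reach) by adding them as LinkedResources on an HTML AlternateView. A sketch, with a made-up image path and content id:
using (MailMessage msg = new MailMessage("fromaddress", "toaddress"))
{
    msg.Subject = "subject";

    // The cid: value in the HTML must match the LinkedResource.ContentId below.
    string html = "<div style=\"background:#eee\"><img src=\"cid:logo\" /> Hello!</div>";
    AlternateView htmlView = AlternateView.CreateAlternateViewFromString(html, null, "text/html");

    LinkedResource logo = new LinkedResource(@"C:\images\logo.png", "image/png"); // illustrative path
    logo.ContentId = "logo";
    htmlView.LinkedResources.Add(logo);

    msg.AlternateViews.Add(htmlView);

    SmtpClient smtp = new SmtpClient("smtp server name");
    smtp.Send(msg);
}
The embedded image travels with the message, so the recipient doesn't need access to your private network to see it.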

Calling wkhtmltopdf to generate PDF from HTML

I'm attempting to create a PDF file from an HTML file. After looking around a little I found wkhtmltopdf, which seems perfect. I need to call this .exe from the ASP.NET server. I've attempted:
Process p = new Process();
p.StartInfo.UseShellExecute = false;
p.StartInfo.FileName = HttpContext.Current.Server.MapPath("wkhtmltopdf.exe");
p.StartInfo.Arguments = "TestPDF.htm TestPDF.pdf";
p.Start();
p.WaitForExit();
With no success of any files being created on the server. Can anyone give me a pointer in the right direction? I put the wkhtmltopdf.exe file at the top level directory of the site. Is there anywhere else it should be held?
Edit: If anyone has better solutions to dynamically create pdf files from html, please let me know.
Update:
My answer below creates the PDF file on disk. I then streamed that file to the user's browser as a download. Consider using something like Hath's answer below to get wkhtmltopdf to output to a stream instead and then send that directly to the user - that will bypass lots of issues with file permissions etc.
My original answer:
Make sure you've specified an output path for the PDF that is writeable by the ASP.NET process of IIS running on your server (usually NETWORK_SERVICE I think).
Mine looks like this (and it works):
/// <summary>
/// Convert Html page at a given URL to a PDF file using open-source tool wkhtmltopdf
/// </summary>
/// <param name="Url"></param>
/// <param name="outputFilename"></param>
/// <returns></returns>
public static bool HtmlToPdf(string Url, string outputFilename)
{
    // assemble destination PDF file name
    string filename = ConfigurationManager.AppSettings["ExportFilePath"] + "\\" + outputFilename + ".pdf";
    // get proj no for header
    Project project = new Project(int.Parse(outputFilename));
    var p = new System.Diagnostics.Process();
    p.StartInfo.FileName = ConfigurationManager.AppSettings["HtmlToPdfExePath"];
    string switches = "--print-media-type ";
    switches += "--margin-top 4mm --margin-bottom 4mm --margin-right 0mm --margin-left 0mm ";
    switches += "--page-size A4 ";
    switches += "--no-background ";
    switches += "--redirect-delay 100";
    p.StartInfo.Arguments = switches + " " + Url + " " + filename;
    p.StartInfo.UseShellExecute = false; // needs to be false in order to redirect output
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.StartInfo.RedirectStandardInput = true; // redirect all 3, as it should be all 3 or none
    p.StartInfo.WorkingDirectory = StripFilenameFromFullPath(p.StartInfo.FileName);
    p.Start();
    // read the output here...
    string output = p.StandardOutput.ReadToEnd();
    // ...then wait n milliseconds for exit (as after exit, it can't read the output)
    p.WaitForExit(60000);
    // read the exit code, close process
    int returnCode = p.ExitCode;
    p.Close();
    // if 0 or 2, it worked (not sure about other values, I want a better way to confirm this)
    return (returnCode == 0 || returnCode == 2);
}
I had the same problem when I tried using MSMQ with a Windows service, but it was very slow for some reason (the process part).
This is what finally worked:
private void DoDownload()
{
    var url = Request.Url.GetLeftPart(UriPartial.Authority) + "/CPCDownload.aspx?IsPDF=False?UserID=" + this.CurrentUser.UserID.ToString();
    var file = WKHtmlToPdf(url);
    if (file != null)
    {
        Response.ContentType = "Application/pdf";
        Response.BinaryWrite(file);
        Response.End();
    }
}
public byte[] WKHtmlToPdf(string url)
{
    var fileName = " - ";
    var wkhtmlDir = "C:\\Program Files\\wkhtmltopdf\\";
    var wkhtml = "C:\\Program Files\\wkhtmltopdf\\wkhtmltopdf.exe";
    var p = new Process();
    p.StartInfo.CreateNoWindow = true;
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.StartInfo.RedirectStandardInput = true;
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.FileName = wkhtml;
    p.StartInfo.WorkingDirectory = wkhtmlDir;
    string switches = "";
    switches += "--print-media-type ";
    switches += "--margin-top 10mm --margin-bottom 10mm --margin-right 10mm --margin-left 10mm ";
    switches += "--page-size Letter ";
    p.StartInfo.Arguments = switches + " " + url + " " + fileName;
    p.Start();
    // read output
    byte[] buffer = new byte[32768];
    byte[] file;
    using (var ms = new MemoryStream())
    {
        while (true)
        {
            int read = p.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
            if (read <= 0)
            {
                break;
            }
            ms.Write(buffer, 0, read);
        }
        file = ms.ToArray();
    }
    // wait or exit
    p.WaitForExit(60000);
    // read the exit code, close process
    int returnCode = p.ExitCode;
    p.Close();
    return returnCode == 0 ? file : null;
}
Thanks Graham Ambrose and everyone else.
OK, so this is an old question, but an excellent one. And since I did not find a good answer, I made my own :) Also, I've posted this super simple project to GitHub.
Here is some sample code:
var pdfData = HtmlToXConverter.ConvertToPdf("<h1>SOO COOL!</h1>");
Here are some key points:
No P/Invoke
No creating of a new process
No file-system (all in RAM)
Native .NET DLL with intellisense, etc.
Ability to generate a PDF or PNG (HtmlToXConverter.ConvertToPng)
Check out the C# wrapper library (using P/Invoke) for the wkhtmltopdf library: https://github.com/pruiz/WkHtmlToXSharp
There are many reason why this is generally a bad idea. How are you going to control the executables that get spawned off but end up living on in memory if there is a crash? What about denial-of-service attacks, or if something malicious gets into TestPDF.htm?
My understanding is that the ASP.NET user account will not have the rights to logon locally. It also needs to have the correct file permissions to access the executable and to write to the file system. You need to edit the local security policy and let the ASP.NET user account (maybe ASPNET) logon locally (it may be in the deny list by default). Then you need to edit the permissions on the NTFS filesystem for the other files. If you are in a shared hosting environment it may be impossible to apply the configuration you need.
The best way to use an external executable like this is to queue jobs from the ASP.NET code and have some sort of service monitor the queue. If you do this you will protect yourself from all sorts of bad things happening. The maintenance issues with changing the user account are not worth the effort in my opinion, and whilst setting up a service or scheduled job is a pain, its just a better design. The ASP.NET page should poll a result queue for the output and you can present the user with a wait page. This is acceptable in most cases.
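A rough sketch of that queued design, assuming a private MSMQ queue named .\Private$\pdfjobs already exists and that a Windows service calls ProcessNext in a loop; the class, queue name and message format are all illustrative:
// Needs a reference to System.Messaging.
using System;
using System.Messaging;

public static class PdfJobQueue
{
    private const string QueuePath = @".\Private$\pdfjobs";

    // Called from the ASP.NET page: enqueue the work instead of spawning wkhtmltopdf directly.
    public static void Enqueue(string sourceUrl, string outputFilename)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(sourceUrl + "|" + outputFilename, "html-to-pdf job");
        }
    }

    // Called from the Windows service / scheduled job: dequeue one job and run the conversion.
    public static void ProcessNext()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message message = queue.Receive(TimeSpan.FromSeconds(30)); // blocks; throws MessageQueueException on timeout
            string[] parts = ((string)message.Body).Split('|');
            // Run the conversion here, e.g. call the HtmlToPdf(url, filename) method
            // from the accepted answer with parts[0] and parts[1].
        }
    }
}
The ASP.NET page then only writes a small message and polls for the finished PDF, while the service account running the worker holds the file-system and process-launch rights.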
You can tell wkhtmltopdf to send its output to stdout by specifying "-" as the output file.
You can then read the output from the process into the response stream and avoid the permissions issues with writing to the file system.
My take on this with 2018 stuff.
I am using async. I am streaming to and from wkhtmltopdf. I created a new StreamWriter because wkhtmltopdf is expecting utf-8 by default but it is set to something else when the process starts.
I didn't include a lot of arguments since those varies from user to user. You can add what you need using additionalArgs.
I removed p.WaitForExit(...) since I wasn't handling if it fails and it would hang anyway on await tStandardOutput. If timeout is needed, then you would have to call Wait(...) on the different tasks with a cancellationtoken or timeout and handle accordingly.
public async Task<byte[]> GeneratePdf(string html, string additionalArgs)
{
    ProcessStartInfo psi = new ProcessStartInfo
    {
        FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe",
        UseShellExecute = false,
        CreateNoWindow = true,
        RedirectStandardInput = true,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        Arguments = "-q -n " + additionalArgs + " - -"
    };
    using (var p = Process.Start(psi))
    using (var pdfStream = new MemoryStream())
    using (var utf8Writer = new StreamWriter(p.StandardInput.BaseStream,
                                             Encoding.UTF8))
    {
        await utf8Writer.WriteAsync(html);
        utf8Writer.Close();
        var tStandardOutput = p.StandardOutput.BaseStream.CopyToAsync(pdfStream);
        var tStandardError = p.StandardError.ReadToEndAsync();
        await tStandardOutput;
        string errors = await tStandardError;
        if (!string.IsNullOrEmpty(errors)) { /* deal/log with errors */ }
        return pdfStream.ToArray();
    }
}
Things I haven't included in there but could be useful if you have images, css or other stuff that wkhtmltopdf will have to load when rendering the html page:
you can pass the authentication cookie using --cookie
in the header of the html page, you can set the base tag with href pointing to the server and wkhtmltopdf will use that if need be
Thanks for the question / answer / all the comments above. I came upon this when I was writing my own C# wrapper for WKHTMLtoPDF and it answered a couple of the problems I had. I ended up writing about this in a blog post - which also contains my wrapper (you'll no doubt see the "inspiration" from the entries above seeping into my code...)
Making PDFs from HTML in C# using WKHTMLtoPDF
Thanks again guys!
The ASP .Net process probably doesn't have write access to the directory.
Try telling it to write to %TEMP%, and see if it works.
Also, make your ASP .Net page echo the process's stdout and stderr, and check for error messages.
Generally, a return code of 0 means the PDF file was created properly and correctly. If it's not created, the value is negative.
using System;
using System.Diagnostics;
using System.Web;

public partial class pdftest : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }
    private void fn_test()
    {
        try
        {
            string url = HttpContext.Current.Request.Url.AbsoluteUri;
            Response.Write(url);
            ProcessStartInfo startInfo = new ProcessStartInfo();
            startInfo.FileName =
                @"C:\PROGRA~1\WKHTML~1\wkhtmltopdf.exe"; //"wkhtmltopdf.exe";
            startInfo.Arguments = url + @" C:\test"
                + Guid.NewGuid().ToString() + ".pdf";
            Process.Start(startInfo);
        }
        catch (Exception ex)
        {
            string xx = ex.Message.ToString();
            Response.Write("<br>" + xx);
        }
    }
    protected void btn_test_Click(object sender, EventArgs e)
    {
        fn_test();
    }
}