We're trying to have the web page served by the Arduino update without having to refresh the page. Our current code is below. Right now the page is refreshing as fast as possible (about once a second), but we'd like the data to update without refreshing the page. Is there a way to do this with HTML?
Thanks for your help!
void loop() {
  WiFiClient client = server.available();   // listen for incoming clients
  if (client) {                             // if you get a client,
    Serial.println("new client");           // print a message out the serial port
    String currentLine = "";                // make a String to hold incoming data from the client
    while (client.connected()) {            // loop while the client's connected
      if (client.available()) {             // if there's bytes to read from the client,
        char c = client.read();             // read a byte, then
        Serial.write(c);                    // print it out the serial monitor
        if (c == '\n') {                    // if the byte is a newline character
          // if the current line is blank, you got two newline characters in a row.
          // that's the end of the client HTTP request, so send a response:
          if (currentLine.length() == 0) {
            // HTTP headers always start with a response code (e.g. HTTP/1.1 200 OK)
            // and a content-type so the client knows what's coming, then a blank line:
            client.println("HTTP/1.1 200 OK");
            client.println("Content-type:text/html");
            client.println();
            voltageReading = analogRead(A0);
            // meta-refresh page as fast as possible
            client.print("<head>");
            client.print("<meta http-equiv=\"refresh\" content=\"0\">");
            client.print("<title>Smart 3 Phase Relay TCNJ</title>");
            client.print("</head>");
            // the content of the HTTP response follows the header:
            client.print("Voltage Reading: ");
            client.print(voltageReading);
            // The HTTP response ends with another blank line:
            client.println();
            // break out of the while loop:
            break;
          }
          else {   // if you got a newline, then clear currentLine:
            currentLine = "";
          }
        }
        else if (c != '\r') {   // if you got anything else but a carriage return character,
          currentLine += c;     // add it to the end of the currentLine
        }
        // Check to see if the client request was "GET /H" or "GET /L":
        if (currentLine.endsWith("GET /H")) {
          digitalWrite(9, HIGH);   // GET /H turns the LED on
        }
        if (currentLine.endsWith("GET /L")) {
          digitalWrite(9, LOW);    // GET /L turns the LED off
        }
      }
    }
    // close the connection:
    client.stop();
    Serial.println("client disconnected");
  }
}
HTTP is connectionless, which means the client can't know when the data has changed. You can only "pull"; AJAX still pulls, but it fetches XML/JSON data instead of the whole HTML page.
But as far as I can understand, you want a "push" technology, which means the server sends new data to clients when it becomes available. That is possible using WebSockets, but it needs a server rewrite to support the protocol, or you can use a library.
A WebSocket can be used like a normal socket (with some security restrictions enforced by the browser, principally that you can only connect to the same domain/IP as the HTML server), which means that once a connection is established, you can just print to it and read from it like you do with a Serial!
Related
I am doing an exercise on web servers and I am having a problem sending an image from the server to the client. I have read my .jpg image file into binary form and saved it in a char array named new_buffer together with the header, then sent it to the web client (I use my browser as the web client), but the image appears corrupted and cannot be displayed in the HTML page. I have tested the process of reading the image file and everything works fine: I tried converting the data from binary back to a .jpg file and it is still correct. So where did I go wrong? Please help me, thanks!
Here is my code for processing and sending the image:
if (strcmp(s_content_type.data(), ".jpg") == 0 || strcmp(s_content_type.data(), ".jpeg") == 0)
{
    if (!file.is_open())
    {
        error(buffer, size_buff); // if the file is not open, respond with a 404 error
    }
    else
    {
        ifstream f(s3, ios::in | ios::binary | ios::ate);
        streampos size = f.tellg();
        char* image = new char[size];
        f.seekg(0, ios::beg);
        f.read(image, size);
        f.close();
        char* new_buffer = new char[(int)size + 1000];
        int t = 0;
        strcpy(header, "HTTP/1.1 200 OK\r\nContent-Type: image/jpeg\r\nConnection : close\r\n\r\n");
        printf("############################################\n");
        printf("Reponse data: \n%s\n", image);
        //
        strcpy(new_buffer, header);
        strcat(new_buffer, image);
        iSendResult = send(ClientSocket, new_buffer, (int)size + 1000, 0);
        if (iSendResult == SOCKET_ERROR) {
            printf("send failed with error: %d\n", WSAGetLastError());
            closesocket(ClientSocket);
        }
        printf("==>Bytes sent: %d\n", iSendResult);
        closesocket(ClientSocket);
    }
}
This is what the web client displays: a broken/corrupted image instead of the photo.
I have tried many different ways to read the image, but they all give the same result, so I think the error is in the read mode? Are there other ways to read an image and send it to a web client without it getting corrupted? Thanks.
I'm using HttpClient from WP8 and doing a POST request. I know the call may take a long time, as I'm actually simulating slow network scenarios. Therefore I set HttpClient.Timeout accordingly, to 5 minutes.
However, I get a Timeout at around 60s. I believe the Timeout is not working.
I believe there is an issue with this for WP as stated in this question:
HttpClient Portable returns 404 notfound on WP8.
They use a workaround, but it does not apply to my scenario: I actually do want to wait a long time.
My questions:
1) Is it a bug/issue of HttpClient for WP8, or am I not setting it properly?
2) Do you think of a workaround still using HttpClient?
I've read that maybe HttpWebRequest is an option. However, I believe HttpClient should be ideal for this 'simple' scenario.
My code is simple:
private static async Task<HttpResponseMessage> PostAsync(Uri serverUri, HttpContent httpContent)
{
    var client = new HttpClient();
    client.Timeout = TimeSpan.FromMinutes(5);
    return await client.PostAsync(serverUri, httpContent).ConfigureAwait(false);
}
The server receives the request, and while it is processing it, the client aborts.
UPDATE: The HttpResponseMessage returned by HttpClient.PostAsync is this: "{StatusCode: 404, ReasonPhrase: '', Version: 0.0, Content: System.Net.Http.StreamContent, Headers: { Content-Length: 0 }}". As I said, the server is found and is receiving the data and processing it.
After some searching and some tests I've come to the conclusion that the problem is Windows Phone itself: it has a 60-second timeout (irrespective of the HttpClient) that, to my knowledge, cannot be changed. See http://social.msdn.microsoft.com/Forums/en-US/faf00a04-8a2e-4a64-b1c1-74c52cf685d3/httpwebrequest-60-seconds-timeout.
As I'm programming the server as well, I will try the advice by Darin Rousseau in the link provided above, specifically to send an OK and then do some more processing.
UPDATE: The problem seems to be the Windows Phone emulator as stated here:
http://social.msdn.microsoft.com/forums/wpapps/en-us/6c114ae9-4dc1-4e1f-afb2-a6b9004bf0c6/httpclient-doesnt-work-on-windows-phone?forum=wpdevelop. In my experience, the TCP connection times out if it doesn't hear anything for 60s.
Therefore my solution is to use the HTTP header characters as a kind of keep-alive. The first line of the HTTP response always starts with "HTTP/1.0 ", so I send those characters one by one with a delay of less than 60s between them. Of course, once the response is ready, everything that is left is sent right away. This buys some time; for instance, with a delay of 50s per character over 9 characters, we get about 450s.
This is a project for my degree so I wouldn't recommend it for production.
By the way, I also tried other characters instead of the substring of the header, for instance the space character, but that results in an HTTP protocol violation.
This is the main part of the code:
private const string Header1 = @"HTTP/1.0 ";
private int _keepAliveCounter = 0;
private readonly object _sendingLock = new object();
private bool _keepAliveDone = true;

private void StartKeepAlive()
{
    Task.Run(() => KeepAlive());
}

/// <summary>
/// Keeps the connection alive by sending the first characters of the HTTP response at an interval.
/// This is a hack for Windows Phone 8, which needs responses within a 60s interval.
/// </summary>
private void KeepAlive()
{
    try
    {
        _keepAliveDone = false;
        _keepAliveCounter = 0;
        while (!_keepAliveDone && _keepAliveCounter < Header1.Length)
        {
            Task.Delay(TimeSpan.FromSeconds(50)).Wait();
            lock (_sendingLock)
            {
                if (!_keepAliveDone)
                {
                    var sw = new StreamWriter(OutputStream);
                    sw.Write(Header1[_keepAliveCounter]);
                    Console.Out.WriteLine("Wrote keep alive char '{0}'", Header1[_keepAliveCounter]);
                    _keepAliveCounter++;
                    sw.Flush();
                }
            }
        }
        _keepAliveCounter = 0;
        _keepAliveDone = true;
    }
    catch (Exception e)
    {
        // log the exception
        Console.Out.WriteLine("Error while sending keepalive: " + e.Message);
    }
}
Then, the actual processing happens in a different thread.
Comments and critiques are appreciated.
It is possible that you are hitting the timeout of the network stream. You can change this by doing:
var handler = new WebRequestHandler();
handler.ReadWriteTimeout = 5 * 60 * 1000;
var client = new HttpClient(handler);
client.Timeout = TimeSpan.FromMinutes(5);
return await client.PostAsync(serverUri, httpContent).ConfigureAwait(false);
The default on the desktop OS is already 5 minutes. However, it is possible that on Windows Phone it has been reduced by default.
I'm using a web server to control devices in the house with a microcontroller running the .NET Micro Framework (Netduino Plus 2). The code below writes a simple HTML page to a device that connects to the microcontroller over the internet.
while (true)
{
    Socket clientSocket = listenerSocket.Accept();
    bool dataReady = clientSocket.Poll(5000000, SelectMode.SelectRead);
    if (dataReady && clientSocket.Available > 0)
    {
        byte[] buffer = new byte[clientSocket.Available];
        int bytesRead = clientSocket.Receive(buffer);
        string request =
            new string(System.Text.Encoding.UTF8.GetChars(buffer));
        if (request.IndexOf("ON") >= 0)
        {
            outD7.Write(true);
        }
        else if (request.IndexOf("OFF") >= 0)
        {
            outD7.Write(false);
        }
        string statusText = "Light is " + (outD7.Read() ? "ON" : "OFF") + ".";
        string response = WebPage.startHTML(statusText, ip);
        clientSocket.Send(System.Text.Encoding.UTF8.GetBytes(response));
    }
    clientSocket.Close();
}
public static string startHTML(string ledStatus, string ip)
{
    string code = "<html><head><title>Netduino Home Automation</title></head><body> <div class=\"status\"><p>" + ledStatus + " </p></div> <div class=\"switch\"><p>On</p><p>Off</p></div></body></html>";
    return code;
}
This works great, so I wrote a full jQuery Mobile website to use instead of the simple HTML. This website is stored on the SD card of the device and, using the code below, should be written out in place of the simple HTML above.
However, my problem is that the Netduino only writes the single HTML page to the browser, with none of the JS/CSS files that are referenced in the HTML. How can I make sure the browser gets all of these files, as a full website?
The code I wrote to read the website from the SD is:
private static string getWebsite()
{
    string text = string.Empty;
    try
    {
        using (StreamReader reader = new StreamReader(@"\SD\index.html"))
        {
            text = reader.ReadToEnd();
        }
    }
    catch (Exception e)
    {
        throw new Exception("Failed to read " + e.Message);
    }
    return text;
}
I replaced the string code = "..." bit with:
string code = getWebsite();
How can I make sure the browser reads all of these files, as a full website?
Isn't it already? Use an HTTP debugging tool like Fiddler. As I read from your code, your listenerSocket is supposed to listen on port 80. Your browser will first retrieve the results of the getWebsite call and parse the HTML.
Then it'll fire more requests, as it finds CSS and JS references in your HTML (none shown). These requests will, as far as we can see from your code, again receive the results of the getWebsite call.
You'll need to parse the incoming HTTP request to see what resource is being requested. It'll become a lot easier if the .NET implementation you run supports the HttpListener class (and it seems to).
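To make that concrete, here is a rough sketch of parsing the request line and serving whichever file the browser asked for from the SD card, with a matching Content-Type. This is illustration only (not tested on a Netduino); the ServeRequest helper, the flat \SD layout and the 404 fallback are assumptions:
private static void ServeRequest(Socket clientSocket, string request)
{
    // The request line looks like: "GET /app.css HTTP/1.1"
    string path = request.Split(' ')[1];
    if (path == "/") path = "/index.html";

    // Pick a content type the browser understands
    string contentType = "text/html";
    if (path.IndexOf(".css") >= 0) contentType = "text/css";
    else if (path.IndexOf(".js") >= 0) contentType = "application/javascript";

    // Assumption: every referenced file sits directly in the SD root
    string filePath = "\\SD\\" + path.Substring(1);

    string status = "200 OK";
    byte[] body;
    try
    {
        using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
        {
            body = new byte[(int)fs.Length];
            fs.Read(body, 0, body.Length);
        }
    }
    catch
    {
        // Very small 404 fallback if the file isn't on the card
        status = "404 Not Found";
        contentType = "text/html";
        body = System.Text.Encoding.UTF8.GetBytes("<html><body>Not found</body></html>");
    }

    string header = "HTTP/1.1 " + status + "\r\nContent-Type: " + contentType +
                    "\r\nContent-Length: " + body.Length.ToString() + "\r\nConnection: close\r\n\r\n";
    clientSocket.Send(System.Text.Encoding.UTF8.GetBytes(header));
    clientSocket.Send(body);
}
In the accept loop, after you've built the request string, you would call ServeRequest(clientSocket, request) instead of always sending the result of startHTML.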
I've implemented a client/server pair that communicates over a TCP socket. The data that I'm writing to the socket is stringified JSON. Initially everything works as expected; however, as I increase the rate of writes, I eventually encounter JSON parse errors where the client receives the beginning of the next write appended to the end of the previous one.
Here is the server code:
var data = {};
data.type = 'req';
data.id = 1;
data.size = 2;
var string = JSON.stringify(data);
client.write(string, callback());
Here is how I am receiving this data on the client:
client.on('data', function(req) {
    var data = req.toString();
    try {
        json = JSON.parse(data);
    } catch (err) {
        console.log("JSON parse error:" + err);
    }
});
The error that I'm receiving as the rate increases is:
SyntaxError: Unexpected token {
Which appears to be the beginning of the next request being tagged onto the end of the current one.
I've tried using ; as a delimiter on the end of each JSON request and then using:
var data = req.toString().substring(0,req.toString().indexOf(';'));
However, instead of resulting in JSON parse errors, this approach seems to result in completely missing some requests on the client side as I increase the rate of writes above 300 per second.
Are there any best practices or more efficient ways to delimit incoming requests via TCP sockets?
Thanks!
Thanks everyone for the explanations, they helped me to better understand the way in which data is sent and received via TCP sockets. Below is a brief overview of the code that I used in the end:
var chunk = "";
client.on('data', function(data) {
chunk += data.toString(); // Add string on the end of the variable 'chunk'
d_index = chunk.indexOf(';'); // Find the delimiter
// While loop to keep going until no delimiter can be found
while (d_index > -1) {
try {
string = chunk.substring(0,d_index); // Create string up until the delimiter
json = JSON.parse(string); // Parse the current string
process(json); // Function that does something with the current chunk of valid json.
}
chunk = chunk.substring(d_index+1); // Cuts off the processed chunk
d_index = chunk.indexOf(';'); // Find the new delimiter
}
});
Comments welcome...
You're on the right track with using a delimiter. However, you can't just extract the stuff before the delimiter, process it, and then discard what came after it. You have to buffer up whatever you got after the delimiter and then concatenate what comes next to it. This means that you could end up with any number (including 0) of JSON "chunks" after a given data event.
Basically you keep a buffer, which you initialize to "". On each data event you concatenate whatever you receive to the end of the buffer and then split it the buffer on the delimiter. The result will be one or more entries, but the last one might not be complete so you need to test the buffer to make sure it ends with your delimiter. If not, you pop the last result and set your buffer to it. You then process whatever results remain (which might not be any).
Be aware that TCP does not make any guarantees about where it divides the chunks of data you receive. All it guarantees is that all the bytes you send will be received in order, unless the connection fails entirely.
I believe Node data events come in whenever the socket says it has data for you. Technically you could get separate data events for each byte in your JSON data and it would still be within the limits of what the OS is allowed to do. Nobody does that, but your code needs to be written as if it could suddenly start happening at any time to be robust. It's up to you to combine data events and then re-split the data stream along boundaries that make sense to you.
To do that, you need to buffer any data that isn't "complete", including data appended to the end of a chunk of "complete" data. If you're using a delimiter, never throw away any data after the delimiter -- always keep it around as a prefix until you see either more data and eventually either another delimiter or the end event.
Another common choice is to prefix all data with a length field. Say you use a fixed 64-bit binary value. Then you always wait for 8 bytes, plus however many more the value in those bytes indicate, to arrive. Say you had a chunk of ten bytes of data incoming. You might get 2 bytes in one event, then 5, then 4 -- at which point you can parse the length and know you need 7 more, since the last 3 bytes of the third chunk were payload. If the next event actually contains 25 bytes, you'd take the first 7 along with the 3 from before and parse that, and look for another length field in bytes 8-16.
That's a contrived example, but be aware that at low traffic rates, the network layer will generally send your data out in whatever chunks you give it, so this sort of thing only really starts to show up as you increase the load. Once the OS starts building packets from multiple writes at once, it will start splitting on a granularity that is convenient for the network and not for you, and you have to deal with that.
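For illustration, here is a minimal sketch of that length-prefix framing (shown in C#; the buffering logic is the same in Node, just with Buffer concatenation, and a 4-byte header is used here instead of the 64-bit one described above):
using System.Collections.Generic;

// Feed raw chunks in as they arrive; complete, length-prefixed messages come out.
public class LengthPrefixedReader
{
    private readonly List<byte> _buffer = new List<byte>();

    public List<byte[]> Push(byte[] chunk)
    {
        _buffer.AddRange(chunk);

        var messages = new List<byte[]>();
        // Keep extracting messages while a full header plus payload is buffered
        while (_buffer.Count >= 4)
        {
            int length = (_buffer[0] << 24) | (_buffer[1] << 16) | (_buffer[2] << 8) | _buffer[3];
            if (_buffer.Count < 4 + length)
                break; // payload not complete yet, wait for the next chunk

            messages.Add(_buffer.GetRange(4, length).ToArray());
            _buffer.RemoveRange(0, 4 + length);
        }
        return messages;
    }
}

// Usage: foreach (var msg in reader.Push(incomingChunk)) { /* parse the JSON here */ }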
Following this response:
var chunk = "";
client.on('data', function(data) {
    chunk += data.toString(); // Add string on the end of the variable 'chunk'
    d_index = chunk.indexOf(';'); // Find the delimiter
    // While loop to keep going until no delimiter can be found
    while (d_index > -1) {
        try {
            string = chunk.substring(0, d_index); // Create string up until the delimiter
            json = JSON.parse(string); // Parse the current string
            process(json); // Function that does something with the current chunk of valid json.
        } catch (err) {
            console.log("JSON parse error: " + err); // Skip a malformed chunk instead of crashing
        }
        chunk = chunk.substring(d_index + 1); // Cuts off the processed chunk
        d_index = chunk.indexOf(';'); // Find the new delimiter
    }
});
I ran into a problem with the delimiter because ';' was part of the data I sent.
It is possible to use the following update in order to implement a custom delimiter:
var chunk = "";
const DELIMITER = (';;;');
client.on('data', function(data) {
chunk += data.toString(); // Add string on the end of the variable 'chunk'
d_index = chunk.indexOf(DELIMITER); // Find the delimiter
// While loop to keep going until no delimiter can be found
while (d_index > -1) {
try {
string = chunk.substring(0,d_index); // Create string up until the delimiter
json = JSON.parse(string); // Parse the current string
process(json); // Function that does something with the current chunk of valid json.
}
chunk = chunk.substring(d_index+DELIMITER.length); // Cuts off the processed chunk
d_index = chunk.indexOf(DELIMITER); // Find the new delimiter
}
});
I know this question is old but I have an answer for the people still looking at this.
As said in the answers above, the data event will be fired with a nodejs Buffer containing the data received.
res.on('data', function(chunk) {
    // chunk contains the data
})
This next part doesn't seem to be commonly known: the end event is fired when all the data has been consumed, and the close event is fired when the client disconnects.
res.on('end', function() {
    // the response body has been consumed
})
The full code to get the entire body is below
var body = Buffer.from('');
res.on('data', function(chunk) {
    if (chunk && chunk.byteLength > 0) {
        body = Buffer.concat([body, chunk]);
    }
})
res.on('end', function() {
    var data = JSON.parse(body.toString());
    // data contains the response json
})
The end event is fired when the data is all consumed: source
The close event is fired when the request is closed: source
Try using the end event and parsing only once all the data has arrived:
var data = '';
client.on('data', function (chunk) {
    data += chunk.toString();
});
client.on('end', function () {
    data = JSON.parse(data); // use try/catch here, because if someone sends you something else for fun, your server can crash.
});
Hope this helps.
I'm attempting to create a PDF file from an HTML file. After looking around a little, I've found wkhtmltopdf to be perfect. I need to call this .exe from the ASP.NET server. I've attempted:
Process p = new Process();
p.StartInfo.UseShellExecute = false;
p.StartInfo.FileName = HttpContext.Current.Server.MapPath("wkhtmltopdf.exe");
p.StartInfo.Arguments = "TestPDF.htm TestPDF.pdf";
p.Start();
p.WaitForExit();
With no success: no files are ever created on the server. Can anyone give me a pointer in the right direction? I put the wkhtmltopdf.exe file in the top-level directory of the site. Is there anywhere else it should be kept?
Edit: If anyone has better solutions to dynamically create pdf files from html, please let me know.
Update:
My answer below creates the PDF file on disk. I then streamed that file to the user's browser as a download. Consider using something like Hath's answer below to get wkhtmltopdf to output to a stream instead and then send that directly to the user; that will bypass lots of issues with file permissions etc.
My original answer:
Make sure you've specified an output path for the PDF that is writeable by the ASP.NET process of IIS running on your server (usually NETWORK_SERVICE I think).
Mine looks like this (and it works):
/// <summary>
/// Convert Html page at a given URL to a PDF file using open-source tool wkhtml2pdf
/// </summary>
/// <param name="Url"></param>
/// <param name="outputFilename"></param>
/// <returns></returns>
public static bool HtmlToPdf(string Url, string outputFilename)
{
    // assemble destination PDF file name
    string filename = ConfigurationManager.AppSettings["ExportFilePath"] + "\\" + outputFilename + ".pdf";

    // get proj no for header
    Project project = new Project(int.Parse(outputFilename));

    var p = new System.Diagnostics.Process();
    p.StartInfo.FileName = ConfigurationManager.AppSettings["HtmlToPdfExePath"];

    string switches = "--print-media-type ";
    switches += "--margin-top 4mm --margin-bottom 4mm --margin-right 0mm --margin-left 0mm ";
    switches += "--page-size A4 ";
    switches += "--no-background ";
    switches += "--redirect-delay 100";

    p.StartInfo.Arguments = switches + " " + Url + " " + filename;
    p.StartInfo.UseShellExecute = false;        // needs to be false in order to redirect output
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.StartInfo.RedirectStandardInput = true;   // redirect all 3, as it should be all 3 or none
    p.StartInfo.WorkingDirectory = StripFilenameFromFullPath(p.StartInfo.FileName);
    p.Start();

    // read the output here...
    string output = p.StandardOutput.ReadToEnd();

    // ...then wait n milliseconds for exit (as after exit, it can't read the output)
    p.WaitForExit(60000);

    // read the exit code, close process
    int returnCode = p.ExitCode;
    p.Close();

    // if 0 or 2, it worked (not sure about other values, I want a better way to confirm this)
    return (returnCode == 0 || returnCode == 2);
}
I had the same problem when I tried using MSMQ with a Windows service, but it was very slow for some reason (the process part).
This is what finally worked:
private void DoDownload()
{
    var url = Request.Url.GetLeftPart(UriPartial.Authority) + "/CPCDownload.aspx?IsPDF=False?UserID=" + this.CurrentUser.UserID.ToString();
    var file = WKHtmlToPdf(url);
    if (file != null)
    {
        Response.ContentType = "Application/pdf";
        Response.BinaryWrite(file);
        Response.End();
    }
}

public byte[] WKHtmlToPdf(string url)
{
    var fileName = " - ";
    var wkhtmlDir = "C:\\Program Files\\wkhtmltopdf\\";
    var wkhtml = "C:\\Program Files\\wkhtmltopdf\\wkhtmltopdf.exe";
    var p = new Process();

    p.StartInfo.CreateNoWindow = true;
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.StartInfo.RedirectStandardInput = true;
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.FileName = wkhtml;
    p.StartInfo.WorkingDirectory = wkhtmlDir;

    string switches = "";
    switches += "--print-media-type ";
    switches += "--margin-top 10mm --margin-bottom 10mm --margin-right 10mm --margin-left 10mm ";
    switches += "--page-size Letter ";

    p.StartInfo.Arguments = switches + " " + url + " " + fileName;
    p.Start();

    // read output
    byte[] buffer = new byte[32768];
    byte[] file;
    using (var ms = new MemoryStream())
    {
        while (true)
        {
            int read = p.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
            if (read <= 0)
            {
                break;
            }
            ms.Write(buffer, 0, read);
        }
        file = ms.ToArray();
    }

    // wait or exit
    p.WaitForExit(60000);

    // read the exit code, close process
    int returnCode = p.ExitCode;
    p.Close();

    return returnCode == 0 ? file : null;
}
Thanks Graham Ambrose and everyone else.
OK, so this is an old question, but an excellent one. And since I did not find a good answer, I made my own :) Also, I've posted this super simple project to GitHub.
Here is some sample code:
var pdfData = HtmlToXConverter.ConvertToPdf("<h1>SOO COOL!</h1>");
Here are some key points:
No P/Invoke
No creating of a new process
No file-system (all in RAM)
Native .NET DLL with intellisense, etc.
Ability to generate a PDF or PNG (HtmlToXConverter.ConvertToPng)
Check out the C# wrapper library (using P/Invoke) for the wkhtmltopdf library: https://github.com/pruiz/WkHtmlToXSharp
There are many reasons why this is generally a bad idea. How are you going to control the executables that get spawned but end up living on in memory if there is a crash? What about denial-of-service attacks, or if something malicious gets into TestPDF.htm?
My understanding is that the ASP.NET user account will not have the rights to log on locally. It also needs to have the correct file permissions to access the executable and to write to the file system. You need to edit the local security policy and let the ASP.NET user account (maybe ASPNET) log on locally (it may be in the deny list by default). Then you need to edit the permissions on the NTFS filesystem for the other files. If you are in a shared hosting environment it may be impossible to apply the configuration you need.
The best way to use an external executable like this is to queue jobs from the ASP.NET code and have some sort of service monitor the queue. If you do this you will protect yourself from all sorts of bad things happening. The maintenance issues with changing the user account are not worth the effort in my opinion, and whilst setting up a service or scheduled job is a pain, it's just a better design. The ASP.NET page should poll a result queue for the output, and you can present the user with a wait page. This is acceptable in most cases.
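To make the hand-off concrete, the enqueueing side can be as small as this (a rough sketch only; it assumes MSMQ is available and uses a hypothetical private queue named pdfjobs, and the Windows service at the other end would dequeue the message and run wkhtmltopdf itself):
using System.Messaging;

public static class PdfJobQueue
{
    private const string QueuePath = @".\Private$\pdfjobs"; // assumption: a private queue set aside for PDF jobs

    public static void Enqueue(string sourceUrl, string outputFilename)
    {
        // Create the queue on first use
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            // The monitoring service reads this message and performs the conversion
            queue.Send(sourceUrl + "|" + outputFilename, "html-to-pdf job");
        }
    }
}
The ASP.NET page then just calls PdfJobQueue.Enqueue(...) and shows the wait page while polling for the result.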
You can tell wkhtmltopdf to send its output to stdout by specifying "-" as the output file.
You can then read the output from the process into the response stream and avoid the permissions issues with writing to the file system.
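A minimal sketch of that approach (untested; the install path and the -q switch are assumptions, and error handling is left out):
public static byte[] HtmlToPdfBytes(string url)
{
    var p = new System.Diagnostics.Process();
    p.StartInfo.FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe"; // assumed install path
    p.StartInfo.Arguments = "-q " + url + " -";   // "-" tells wkhtmltopdf to write the PDF to stdout
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.CreateNoWindow = true;
    p.Start();

    byte[] pdf;
    using (var ms = new System.IO.MemoryStream())
    {
        p.StandardOutput.BaseStream.CopyTo(ms);   // stream the PDF bytes straight into memory
        pdf = ms.ToArray();
    }
    p.WaitForExit();
    return pdf;
}

// Then in the page: Response.ContentType = "application/pdf"; Response.BinaryWrite(pdf); Response.End();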
My take on this with 2018 stuff.
I am using async. I am streaming to and from wkhtmltopdf. I created a new StreamWriter because wkhtmltopdf is expecting utf-8 by default but it is set to something else when the process starts.
I didn't include a lot of arguments since those vary from user to user. You can add what you need using additionalArgs.
I removed p.WaitForExit(...) since I wasn't handling the case where it fails, and it would hang anyway on await tStandardOutput. If a timeout is needed, you would have to call Wait(...) on the different tasks with a cancellation token or timeout and handle it accordingly.
public async Task<byte[]> GeneratePdf(string html, string additionalArgs)
{
    ProcessStartInfo psi = new ProcessStartInfo
    {
        FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe",
        UseShellExecute = false,
        CreateNoWindow = true,
        RedirectStandardInput = true,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        Arguments = "-q -n " + additionalArgs + " - -"
    };

    using (var p = Process.Start(psi))
    using (var pdfStream = new MemoryStream())
    using (var utf8Writer = new StreamWriter(p.StandardInput.BaseStream, Encoding.UTF8))
    {
        await utf8Writer.WriteAsync(html);
        utf8Writer.Close();

        var tStandardOutput = p.StandardOutput.BaseStream.CopyToAsync(pdfStream);
        var tStandardError = p.StandardError.ReadToEndAsync();

        await tStandardOutput;
        string errors = await tStandardError;

        if (!string.IsNullOrEmpty(errors)) { /* deal with/log the errors */ }

        return pdfStream.ToArray();
    }
}
Things I haven't included in there but could be useful if you have images, css or other stuff that wkhtmltopdf will have to load when rendering the html page:
you can pass the authentication cookie using --cookie (see the sketch after this list)
in the header of the HTML page, you can set the base tag with href pointing to the server, and wkhtmltopdf will use that if need be
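For the cookie part, it might look roughly like this (a sketch only; the cookie name is a placeholder and psi refers to the ProcessStartInfo from the answer above):
// forward the caller's auth/session cookie so wkhtmltopdf can load protected images/CSS
string cookieName = ".ASPXAUTH";   // placeholder: whatever your auth cookie is called
string cookieValue = HttpContext.Current.Request.Cookies[cookieName].Value;
psi.Arguments = "-q -n --cookie " + cookieName + " " + cookieValue + " - -";

// and in the HTML written to stdin, make relative URLs resolve against the server:
// <head><base href="http://your-server.example/"></head>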
Thanks for the question / answer / all the comments above. I came upon this when I was writing my own C# wrapper for WKHTMLtoPDF and it answered a couple of the problems I had. I ended up writing about this in a blog post - which also contains my wrapper (you'll no doubt see the "inspiration" from the entries above seeping into my code...)
Making PDFs from HTML in C# using WKHTMLtoPDF
Thanks again guys!
The ASP .Net process probably doesn't have write access to the directory.
Try telling it to write to %TEMP%, and see if it works.
Also, make your ASP .Net page echo the process's stdout and stderr, and check for error messages.
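For example (a sketch of the idea only, reusing the TestPDF.htm input from the question; needs System.Diagnostics, System.IO and System.Web):
// write the PDF to the worker process's %TEMP% folder, which ASP.NET can normally write to
string outputPath = Path.Combine(Path.GetTempPath(), "TestPDF.pdf");

Process p = new Process();
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.StartInfo.FileName = HttpContext.Current.Server.MapPath("wkhtmltopdf.exe");
p.StartInfo.Arguments = "TestPDF.htm \"" + outputPath + "\"";
p.Start();

string stdout = p.StandardOutput.ReadToEnd();   // echo these back to the page while debugging
string stderr = p.StandardError.ReadToEnd();
p.WaitForExit();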
Generally, the return code is 0 if the PDF file is created properly and correctly. If it's not created, the value is negative.
using System;
using System.Diagnostics;
using System.Web;

public partial class pdftest : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    private void fn_test()
    {
        try
        {
            string url = HttpContext.Current.Request.Url.AbsoluteUri;
            Response.Write(url);

            ProcessStartInfo startInfo = new ProcessStartInfo();
            startInfo.FileName =
                @"C:\PROGRA~1\WKHTML~1\wkhtmltopdf.exe"; // "wkhtmltopdf.exe";
            startInfo.Arguments = url + @" C:\test"
                + Guid.NewGuid().ToString() + ".pdf";
            Process.Start(startInfo);
        }
        catch (Exception ex)
        {
            string xx = ex.Message.ToString();
            Response.Write("<br>" + xx);
        }
    }

    protected void btn_test_Click(object sender, EventArgs e)
    {
        fn_test();
    }
}