I need to generate a PDF from HTML dynamically using ASP.NET. The HTML is stored in a database; it contains tables and CSS and can run up to 10 pages. I have tried iTextSharp by passing the HTML directly, but it produces a PDF that won't open. The pdf.codeplex.com library has no documentation and produces a PDF with styles picked up from the parent page.
Any other solution would be helpful.
I've tried many HTML-to-PDF solutions, including iTextSharp, wkhtmltopdf and ABCpdf (paid).
I'm currently settled on PhantomJS, a headless, open-source, WebKit-based browser. It is scriptable with a JavaScript API that is reasonably well documented.
The only disadvantage I found was that attempting to pass HTML into the process via stdin was unsuccessful because the REPL still has some bugs. I also found that using stdout seemed to be a lot slower than simply letting the process write to disk.
The code below avoids stdin and stdout by writing the JavaScript input to a temp file, executing PhantomJS, copying the output file to a MemoryStream and cleaning up the temporary files at the end.
using System.IO;
using System.Drawing;
using System.Diagnostics;
using System.Text;

public Stream HTMLtoPDF(string html, Size pageSize)
{
    string path = "C:\\dev\\";
    string inputFileName = "tmp.js";
    string outputFileName = "tmp.pdf";

    // Build the PhantomJS script that loads the HTML and renders it to a PDF.
    StringBuilder input = new StringBuilder();
    input.Append("var page = require('webpage').create();");
    input.Append(String.Format("page.viewportSize = {{ width: {0}, height: {1} }};", pageSize.Width, pageSize.Height));
    input.Append("page.paperSize = { format: 'Letter', orientation: 'portrait', margin: '1cm' };");
    input.Append("page.onLoadFinished = function() {");
    input.Append(String.Format("page.render('{0}');", outputFileName));
    input.Append("phantom.exit();");
    input.Append("};");
    // html is being passed into a string literal so make sure any double quotes are properly escaped
    input.Append("page.content = \"" + html.Replace("\"", "\\\"") + "\";");
    File.WriteAllText(path + inputFileName, input.ToString());

    // Run PhantomJS against the temp script and wait (up to 10 seconds) for it to finish.
    ProcessStartInfo psi = new ProcessStartInfo();
    psi.FileName = path + "phantomjs.exe";
    psi.Arguments = inputFileName;
    psi.WorkingDirectory = Path.GetDirectoryName(psi.FileName);
    psi.UseShellExecute = false;
    psi.CreateNoWindow = true;
    Process p = Process.Start(psi);
    p.WaitForExit(10000);

    // Copy the rendered PDF into a MemoryStream and clean up the temp files.
    Stream strOut = new MemoryStream();
    using (Stream fileStream = File.OpenRead(path + outputFileName))
    {
        fileStream.CopyTo(strOut);
    }
    strOut.Position = 0;
    File.Delete(path + inputFileName);
    File.Delete(path + outputFileName);
    return strOut;
}
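A minimal usage sketch (the document-loading helper, the page size and the ASP.NET Response plumbing are my assumptions, not part of the method above):

// Hypothetical caller in an ASP.NET page: pull the HTML out of the database,
// render it with the method above, and stream the PDF back as a download.
string html = LoadHtmlFromDatabase(documentId);   // assumed data-access helper
using (Stream pdf = HTMLtoPDF(html, new Size(1024, 1448)))
{
    Response.ContentType = "application/pdf";
    Response.AddHeader("content-disposition", "attachment; filename=document.pdf");
    pdf.CopyTo(Response.OutputStream);
    Response.End();
}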
I need to embed MS Office documents (Excel, Word) into AutoCAD using Design Automation. Searching around the web, it seems this is not possible because the MS Office applications, which would act as the OLE client, would need to be running on the Forge server. Could someone confirm that this is the case?
If I am correct in my statement above, my next best alternative would be to embed .EMF files created from each page of the document I want to embed; alternatively, using raster images would also be acceptable. Creating the .EMF or raster files is not a problem. I just can't find a way of embedding the files that does not involve copying them to the clipboard and using the PASTECLIP command. That approach has worked for me in the AutoCAD application using a C# AutoCAD .NET plugin (an OLE2Frame object is created), but it fails in accoreconsole because PASTECLIP uses a UI class that is not available. This leads me to think the same would happen while running the bundle in Design Automation.
The best I have been able to achieve so far is to write raster image files to the working directory and link them to the AutoCAD document using RasterImageDef and RasterImage (code below). Is this the only way I can do this? Can I do something similar with an EMF image, which is vector based, instead of a raster image? Or is there a way to actually embed an EMF (preferred) or raster image instead of just linking the images?
The code below fails if I use .EMF files, presumably because RasterImageDef and RasterImage do not support EMF, it being a vector format rather than a raster format.
[CommandMethod("TEST")]
public void Test()
{
Document doc = Application.DocumentManager.MdiActiveDocument;
Database db = doc.Database;
Editor ed = doc.Editor;
// Get the file name of the image using the editor to prompt for the file name
// Create the prompt
PromptOpenFileOptions options = new PromptOpenFileOptions("Enter Sequence file path");
options.PreferCommandLine = true;
// Get the file name, use no quotes
PromptFileNameResult result = null;
try { result = ed.GetFileNameForOpen(options); }
catch (System.Exception ex)
{
DisplayLogMessage($"Could not get sequence file location. Exception: {ex.Message}.", ed);
return;
}
// Get the rtf filename from the results
string filename = result.StringResult;
DisplayLogMessage($"Got sequence filename: {filename}", ed);
// Load the Sequence.rtf document
Aspose.Words.Document seq;
using (FileStream st = new FileStream(filename, FileMode.Open))
{
seq = new Aspose.Words.Document(st);
st.Close();
}
DisplayLogMessage($"Aspose.Words Loaded: {filename}", ed);
Transaction trans = db.TransactionManager.StartTransaction();
// Get or create the image dictionary
ObjectId imageDictId = RasterImageDef.GetImageDictionary(db);
if (imageDictId.IsNull)
imageDictId = RasterImageDef.CreateImageDictionary(db);
// Open the image dictionary
DBDictionary imageDict = (DBDictionary)trans.GetObject(imageDictId, OpenMode.ForRead);
double x = 0.0;
double y = 0.0;
try
{
// For each page in the Sequence.
for (int i = 0; i < seq.PageCount; i++)
{
DisplayLogMessage($"Starting page {i + 1}", ed);
// extract the page.
Aspose.Words.Document newSeq = seq.ExtractPages(i, 1);
Aspose.Words.Saving.ImageSaveOptions imgOptions = new Aspose.Words.Saving.ImageSaveOptions(Aspose.Words.SaveFormat.Emf);
imgOptions.Resolution = 300;
DisplayLogMessage($"Extracted page {i + 1}", ed);
string dictName = Guid.NewGuid().ToString();
filename = Path.Combine(Path.GetDirectoryName(doc.Name), dictName + ".Emf");
// Save the image
SaveOutputParameters sp = newSeq.Save(filename, imgOptions);
DisplayLogMessage($"Saved {dictName}.Emf", ed);
RasterImageDef imageDef = null;
ObjectId imageDefId;
// see if my guid is in there
if (imageDict.Contains(dictName))
imageDefId = (ObjectId)imageDict.GetAt(dictName);
else
{
// Create an image def
imageDef = new RasterImageDef();
imageDef.SourceFileName = $"./{dictName}.Emf";
// load the image
imageDef.Load();
imageDict.UpgradeOpen();
imageDefId = imageDict.SetAt(dictName, imageDef);
trans.AddNewlyCreatedDBObject(imageDef, true);
}
// create raster image to reference the definition
RasterImage image = new RasterImage();
image.ImageDefId = imageDefId;
// Prepare orientation
Vector3d uCorner = new Vector3d(8.5, 0, 0);
Vector3d vOnPlane = new Vector3d(0, 11, 0);
Point3d ptInsert = new Point3d(x, y, 0);
x += 8.5;
CoordinateSystem3d coordinateSystem = new CoordinateSystem3d(ptInsert, uCorner, vOnPlane);
image.Orientation = coordinateSystem;
// some other stuff
image.ImageTransparency = true;
image.ShowImage = true;
// Add the image to ModelSpace
BlockTable bt = (BlockTable)trans.GetObject(db.BlockTableId, OpenMode.ForRead);
BlockTableRecord btr = (BlockTableRecord)trans.GetObject(bt[BlockTableRecord.ModelSpace], OpenMode.ForWrite);
btr.AppendEntity(image);
trans.AddNewlyCreatedDBObject(image, true);
// Create a reactor between the RasterImage
// and the RasterImageDef to avoid the "Unreferenced"
// warning the XRef palette
RasterImage.EnableReactors(true);
image.AssociateRasterDef(imageDef);
}
trans.Commit();
}
catch (System.Exception ex)
{
DisplayLogMessage("ERROR: " + ex.Message,ed);
trans.Abort();
}
}
Raster images are always linked. There's no way to embed them. The only way to embed an image is to use AcDbOle2Frame (C++) or Autodesk.AutoCAD.DatabaseServices.Ole2Frame (C#). In theory, it is possible to create these objects without the "OLE server" being present but I haven't tried so I don't know if enough APIs are exposed to make it happen.
You should try it and see how far you can get.
Albert
There is a way to embed a raster image, but it is not straightforward: you need to use the C++/ObjectARX API. Please refer to https://github.com/MadhukarMoogala/EmbedRasterImage/tree/EmbedRasterImageUsingDBX
Although I have found questions similar to mine, the answers to those questions don't answer my question. I'm trying to convert XML and XSLT into HTML so it can then be converted to a PDF.
I have an HTML page (249 lines) that gets converted to XSLT:
string filePath = hostURL + "path/to/somefile.htm";
xslt = client.DownloadData(filePath);
The corresponding XML gets generated with XmlDocument xmlDoc = new XmlDocument();, then XmlElement nodes are added and a MemoryStream xmlStream is returned. xmlStream and xslt are then passed to
string html = Helper.GenerateHTML(xmlStream, xslt);
In Helper.GenerateHTML(this MemoryStream xmlStream, byte[] xsltByte) this occurs:
XmlReaderSettings settings = new XmlReaderSettings();
settings.XmlResolver = null;
settings.DtdProcessing = DtdProcessing.Parse;

using (var xsltMemoryStream = new MemoryStream(xsltByte))
using (var xmlReader = XmlReader.Create(xsltMemoryStream, settings))
using (var xmlMemoryStream = new MemoryStream())
using (var xmlTextWriter = new XmlTextWriter(xmlMemoryStream, null))
{
    var xct = new XslCompiledTransform();
    xct.Load(xmlReader);
    ...
}
but I get the error at xct.Load(xmlReader); and the Create PDF button click that invoked this code does not result in a PDF being created or displayed.
Neither the XSL nor the XML has symbol-related issues ("&" vs. "&amp;", etc.). For what it's worth, there is another process that extracts the XSLT from the database (instead of from an HTML file), generates its corresponding XML with XMLGenerator xgen = new XMLGenerator() and then calls Helper.GenerateHTML(xmlStream, xslt); it does not choke at the xct.Load(xmlReader); line, and a resulting PDF is displayed.
Why does the parsing error occur?
I'm using a web server to control devices in the house with a microcontroller running the .NET Micro Framework (Netduino Plus 2). The code below serves a simple HTML page to any device that connects to the microcontroller over the internet.
while (true)
{
    Socket clientSocket = listenerSocket.Accept();
    bool dataReady = clientSocket.Poll(5000000, SelectMode.SelectRead);
    if (dataReady && clientSocket.Available > 0)
    {
        byte[] buffer = new byte[clientSocket.Available];
        int bytesRead = clientSocket.Receive(buffer);
        string request =
            new string(System.Text.Encoding.UTF8.GetChars(buffer));

        if (request.IndexOf("ON") >= 0)
        {
            outD7.Write(true);
        }
        else if (request.IndexOf("OFF") >= 0)
        {
            outD7.Write(false);
        }

        string statusText = "Light is " + (outD7.Read() ? "ON" : "OFF") + ".";
        string response = WebPage.startHTML(statusText, ip);
        clientSocket.Send(System.Text.Encoding.UTF8.GetBytes(response));
    }
    clientSocket.Close();
}
public static string startHTML(string ledStatus, string ip)
{
    string code = "<html><head><title>Netduino Home Automation</title></head><body> <div class=\"status\"><p>" + ledStatus + " </p></div> <div class=\"switch\"><p>On</p><p>Off</p></div></body></html>";
    return code;
}
This works great, so I wrote a full jQuery Mobile website to use instead of the simple HTML. The website is stored on the SD card of the device and, using the code below, should be served in place of the simple HTML above.
However, my problem is that the Netduino only sends the single HTML page to the browser, with none of the JS/CSS files that are referenced in the HTML. How can I make sure the browser gets all of these files, as a full website?
The code I wrote to read the website from the SD is:
private static string getWebsite()
{
    string text = string.Empty;
    try
    {
        using (StreamReader reader = new StreamReader(@"\SD\index.html"))
        {
            text = reader.ReadToEnd();
        }
    }
    catch (Exception e)
    {
        throw new Exception("Failed to read " + e.Message);
    }
    return text;
}
I replaced the string code = "..." assignment with:
string code = getWebsite();
How can I make sure the browser reads all of these files, as a full website?
Isn't it already? Use an HTTP debugging tool like Fiddler. As I read your code, your listenerSocket is supposed to listen on port 80. Your browser will first retrieve the result of the getWebsite call and parse the HTML.
Then it'll fire more requests as it finds CSS and JS references in your HTML (none shown). These requests will, as far as we can see from your code, again receive the result of the getWebsite call.
You'll need to parse the incoming HTTP request to see which resource is being requested and return that file, as in the sketch below. It'll become a lot easier if the .NET implementation you run supports the HttpListener class (and it seems to).
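A rough, untested sketch of that idea (the content-type list and the assumption that all files sit in the root of the SD card are mine, not from the original code; clientSocket and request are the variables from the loop in the question, and the System.IO assembly for NETMF is assumed to be referenced):

// Map the first line of the HTTP request ("GET /app.css HTTP/1.1")
// to a file on the SD card and send it back with a matching Content-Type.
string requestedPath = request.Split(' ')[1];            // e.g. "/app.css"
if (requestedPath == "/") requestedPath = "/index.html";
string filePath = @"\SD\" + requestedPath.Substring(1);  // assumes files sit in the SD root

string contentType = "text/html";
if (requestedPath.IndexOf(".css") >= 0) contentType = "text/css";
else if (requestedPath.IndexOf(".js") >= 0) contentType = "application/javascript";
else if (requestedPath.IndexOf(".jpg") >= 0) contentType = "image/jpeg";

if (File.Exists(filePath))
{
    using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    {
        byte[] body = new byte[(int)fs.Length];
        fs.Read(body, 0, body.Length);
        string header = "HTTP/1.1 200 OK\r\nContent-Type: " + contentType +
                        "\r\nContent-Length: " + body.Length + "\r\nConnection: close\r\n\r\n";
        clientSocket.Send(System.Text.Encoding.UTF8.GetBytes(header));
        clientSocket.Send(body);
    }
}
else
{
    clientSocket.Send(System.Text.Encoding.UTF8.GetBytes(
        "HTTP/1.1 404 Not Found\r\nConnection: close\r\n\r\n"));
}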
I have a process where the HTML is stored in a database with image links; the images are stored in the database as well. I've created a controller action which reads the image from the database. The path I'm generating is something like /File/Image?path=Root/test.jpg.
This image path is embedded in the HTML in an img tag like <img alt="logo" src="/File/Image?path=Root/001.jpg" />
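Such a controller action might look roughly like this (a hypothetical sketch; the data-access helper and the content type are illustrative, not my actual code):

using System.Web.Mvc;

// Serves images stored in the database, so that
// /File/Image?path=Root/001.jpg returns the image bytes.
public class FileController : Controller
{
    public ActionResult Image(string path)
    {
        byte[] imageBytes = ImageRepository.GetByPath(path); // assumed data-access helper
        return File(imageBytes, "image/jpeg");
    }
}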
I'm trying to use iTextSharp to read the HTML from the database and create a PDF document:
string _html = GenerateDocumentHelpers.CommissioningSheet(fleetId);
Document _document = new Document(PageSize.A4, 80, 50, 30, 65);
MemoryStream _memStream = new MemoryStream();
PdfWriter _writer = PdfWriter.GetInstance(_document, _memStream);
StringReader _reader = new StringReader(_html);
HTMLWorker _worker = new HTMLWorker(_document);
_document.Open();
_worker.Parse(_reader);
_document.Close();
Response.Clear();
Response.AddHeader("content-disposition", "attachment; filename=Commissioning.pdf");
Response.ContentType = "application/pdf";
Response.Buffer = true;
Response.OutputStream.Write(_memStream.GetBuffer(), 0, _memStream.GetBuffer().Length);
Response.OutputStream.Flush();
Response.End();
return new FileStreamResult(Response.OutputStream, "application/pdf");
This code gives me an illegal character error. It comes from the image tag: it is not recognizing the ? and = characters. Is there a way I can render this HTML with the img tag so that, when I create the PDF, it renders both the HTML and the image from the database? If iTextSharp can't do it, can you suggest any other open-source third-party tools that can accomplish this task?
If the image source isn't a fully qualified URL including the protocol, then iTextSharp assumes that it is a file-based URL. The solution is to convert all image links to absolute form, such as http://YOUR_DOMAIN/File/Image?path=Root/001.jpg.
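A simple way to do that before parsing, assuming all database images go through the /File/Image route, is a string replacement on the HTML (sketch only):

// Prefix every relative /File/Image src with the current site's authority,
// e.g. "http://YOUR_DOMAIN", so iTextSharp treats it as a web URL rather than a file path.
string baseUrl = HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Authority);
_html = _html.Replace("src=\"/File/Image", "src=\"" + baseUrl + "/File/Image");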
You can also set a global property on the parser that works pretty much the same as the HTML <BASE> tag:
//Create a provider collection to set various processing properties
System.Collections.Generic.Dictionary<string, object> providers = new System.Collections.Generic.Dictionary<string, object>();
//Set the image base. This will be prepended to the SRC so watch your forward slashes
providers.Add(HTMLWorker.IMG_BASEURL, "http://YOUR_DOMAIN");
//Bind the providers to the worker
worker.SetProviders(providers);
worker.Parse(reader);
Below is a full working C# 2010 WinForms app targeting iTextSharp 5.1.2.0 that shows how to use a relative image and set its base using the global provider. Everything is pretty much the same as your code, although I threw in a bunch of using statements to ensure proper cleanup. Make sure to watch the leading and trailing forward slashes on everything: the base URL gets prepended directly onto the SRC attribute, and you might end up with double slashes if it's not done correctly. I'm hard-coding a domain here, but you should be able to easily use the System.Web.HttpContext.Current.Request object instead.
using System;
using System.IO;
using System.Windows.Forms;
using iTextSharp.text;
using iTextSharp.text.html.simpleparser;
using iTextSharp.text.pdf;
namespace WindowsFormsApplication1
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
string html = #"<img src=""/images/home_mississippi.jpg"" />";
string outputFile = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "HtmlTest.pdf");
using (FileStream fs = new FileStream(outputFile, FileMode.Create, FileAccess.Write, FileShare.None)) {
using (Document doc = new Document(PageSize.TABLOID)) {
using (PdfWriter writer = PdfWriter.GetInstance(doc, fs)) {
doc.Open();
using (StringReader reader = new StringReader(html)) {
using (HTMLWorker worker = new HTMLWorker(doc)) {
//Create a provider collection to set various processing properties
System.Collections.Generic.Dictionary<string, object> providers = new System.Collections.Generic.Dictionary<string, object>();
//Set the image base. This will be prepended to the SRC so watch your forward slashes
providers.Add(HTMLWorker.IMG_BASEURL, "http://www.vendiadvertising.com");
//Bind the providers to the worker
worker.SetProviders(providers);
worker.Parse(reader);
}
}
doc.Close();
}
}
}
this.Close();
}
}
}
I'm attempting to create a PDF file from an HTML file. After looking around a little I've found wkhtmltopdf to be perfect. I need to call this .exe from the ASP.NET server. I've attempted:
Process p = new Process();
p.StartInfo.UseShellExecute = false;
p.StartInfo.FileName = HttpContext.Current.Server.MapPath("wkhtmltopdf.exe");
p.StartInfo.Arguments = "TestPDF.htm TestPDF.pdf";
p.Start();
p.WaitForExit();
No files are created on the server. Can anyone give me a pointer in the right direction? I put the wkhtmltopdf.exe file in the top-level directory of the site; is there anywhere else it should be held?
Edit: If anyone has better solutions to dynamically create pdf files from html, please let me know.
Update:
My answer below creates the PDF file on disk. I then streamed that file to the user's browser as a download. Consider using something like Hath's answer below to get wkhtmltopdf to output to a stream instead and send that directly to the user; that will bypass lots of issues with file permissions etc.
My original answer:
Make sure you've specified an output path for the PDF that is writeable by the ASP.NET process of IIS running on your server (usually NETWORK SERVICE, I think).
Mine looks like this (and it works):
/// <summary>
/// Convert Html page at a given URL to a PDF file using open-source tool wkhtml2pdf
/// </summary>
/// <param name="Url"></param>
/// <param name="outputFilename"></param>
/// <returns></returns>
public static bool HtmlToPdf(string Url, string outputFilename)
{
// assemble destination PDF file name
string filename = ConfigurationManager.AppSettings["ExportFilePath"] + "\\" + outputFilename + ".pdf";
// get proj no for header
Project project = new Project(int.Parse(outputFilename));
var p = new System.Diagnostics.Process();
p.StartInfo.FileName = ConfigurationManager.AppSettings["HtmlToPdfExePath"];
string switches = "--print-media-type ";
switches += "--margin-top 4mm --margin-bottom 4mm --margin-right 0mm --margin-left 0mm ";
switches += "--page-size A4 ";
switches += "--no-background ";
switches += "--redirect-delay 100";
p.StartInfo.Arguments = switches + " " + Url + " " + filename;
p.StartInfo.UseShellExecute = false; // needs to be false in order to redirect output
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.StartInfo.RedirectStandardInput = true; // redirect all 3, as it should be all 3 or none
p.StartInfo.WorkingDirectory = StripFilenameFromFullPath(p.StartInfo.FileName);
p.Start();
// read the output here...
string output = p.StandardOutput.ReadToEnd();
// ...then wait n milliseconds for exit (as after exit, it can't read the output)
p.WaitForExit(60000);
// read the exit code, close process
int returnCode = p.ExitCode;
p.Close();
// if 0 or 2, it worked (not sure about other values, I want a better way to confirm this)
return (returnCode == 0 || returnCode == 2);
}
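The download step mentioned in the update above might look roughly like this (a sketch; the Response plumbing and header values are assumptions, and it reuses the ExportFilePath config key and outputFilename from the method):

// After HtmlToPdf(...) returns true, send the file it wrote to the browser as a download.
string pdfPath = ConfigurationManager.AppSettings["ExportFilePath"] + "\\" + outputFilename + ".pdf";
Response.ContentType = "application/pdf";
Response.AddHeader("content-disposition", "attachment; filename=" + outputFilename + ".pdf");
Response.TransmitFile(pdfPath);
Response.End();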
I had the same problem when I tried using MSMQ with a Windows service, but it was very slow for some reason (the process part).
This is what finally worked:
private void DoDownload()
{
var url = Request.Url.GetLeftPart(UriPartial.Authority) + "/CPCDownload.aspx?IsPDF=False&UserID=" + this.CurrentUser.UserID.ToString();
var file = WKHtmlToPdf(url);
if (file != null)
{
Response.ContentType = "Application/pdf";
Response.BinaryWrite(file);
Response.End();
}
}
public byte[] WKHtmlToPdf(string url)
{
var fileName = " - ";
var wkhtmlDir = "C:\\Program Files\\wkhtmltopdf\\";
var wkhtml = "C:\\Program Files\\wkhtmltopdf\\wkhtmltopdf.exe";
var p = new Process();
p.StartInfo.CreateNoWindow = true;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.StartInfo.RedirectStandardInput = true;
p.StartInfo.UseShellExecute = false;
p.StartInfo.FileName = wkhtml;
p.StartInfo.WorkingDirectory = wkhtmlDir;
string switches = "";
switches += "--print-media-type ";
switches += "--margin-top 10mm --margin-bottom 10mm --margin-right 10mm --margin-left 10mm ";
switches += "--page-size Letter ";
p.StartInfo.Arguments = switches + " " + url + " " + fileName;
p.Start();
//read output
byte[] buffer = new byte[32768];
byte[] file;
using(var ms = new MemoryStream())
{
while(true)
{
int read = p.StandardOutput.BaseStream.Read(buffer, 0,buffer.Length);
if(read <=0)
{
break;
}
ms.Write(buffer, 0, read);
}
file = ms.ToArray();
}
// wait or exit
p.WaitForExit(60000);
// read the exit code, close process
int returnCode = p.ExitCode;
p.Close();
return returnCode == 0 ? file : null;
}
Thanks Graham Ambrose and everyone else.
OK, so this is an old question, but an excellent one. And since I did not find a good answer, I made my own :) Also, I've posted this super simple project to GitHub.
Here is some sample code:
var pdfData = HtmlToXConverter.ConvertToPdf("<h1>SOO COOL!</h1>");
Here are some key points:
No P/Invoke
No creating of a new process
No file-system (all in RAM)
Native .NET DLL with intellisense, etc.
Ability to generate a PDF or PNG (HtmlToXConverter.ConvertToPng)
Check out the C# wrapper library (using P/Invoke) for the wkhtmltopdf library: https://github.com/pruiz/WkHtmlToXSharp
There are many reasons why this is generally a bad idea. How are you going to control the executables that get spawned but end up living on in memory if there is a crash? What about denial-of-service attacks, or something malicious getting into TestPDF.htm?
My understanding is that the ASP.NET user account will not have the right to log on locally. It also needs the correct file permissions to access the executable and to write to the file system. You need to edit the local security policy to let the ASP.NET user account (maybe ASPNET) log on locally (it may be in the deny list by default), and then edit the permissions on the NTFS file system for the other files. If you are in a shared hosting environment it may be impossible to apply the configuration you need.
The best way to use an external executable like this is to queue jobs from the ASP.NET code and have some sort of service monitor the queue; a minimal sketch of the queueing side follows. If you do this you will protect yourself from all sorts of bad things happening. The maintenance issues with changing the user account are not worth the effort in my opinion, and while setting up a service or scheduled job is a pain, it's just a better design. The ASP.NET page should poll a result queue for the output, and you can present the user with a wait page. This is acceptable in most cases.
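A minimal sketch of the queueing idea using MSMQ (the queue name and message shape are illustrative assumptions, not from the original post):

using System.Messaging;

public static class PdfJobQueue
{
    private const string QueuePath = @".\private$\pdfjobs";

    public static void Enqueue(string sourceUrl, string outputName)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            // A Windows service would receive these messages, run wkhtmltopdf,
            // and write the result somewhere the web app can poll.
            queue.Send(sourceUrl + "|" + outputName);
        }
    }
}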
You can tell wkhtmltopdf to send its output to stdout by specifying "-" as the output file.
You can then read the output from the process into the response stream and avoid the permission issues that come with writing to the file system.
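A minimal sketch of that approach (the install path and URL are placeholders, and this assumes it runs in an ASP.NET page or handler with access to Response):

// Run wkhtmltopdf with "-" as the output file so the PDF goes to stdout,
// then copy stdout straight into the HTTP response.
var psi = new System.Diagnostics.ProcessStartInfo
{
    FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe",   // assumed install path
    Arguments = "http://example.com/page-to-convert -",
    UseShellExecute = false,
    RedirectStandardOutput = true,
    CreateNoWindow = true
};

using (var p = System.Diagnostics.Process.Start(psi))
{
    Response.ContentType = "application/pdf";
    Response.AddHeader("content-disposition", "attachment; filename=page.pdf");
    p.StandardOutput.BaseStream.CopyTo(Response.OutputStream);
    p.WaitForExit();
}
Response.End();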
My take on this with 2018 stuff.
I am using async and streaming to and from wkhtmltopdf. I created a new StreamWriter because wkhtmltopdf expects UTF-8 by default, but standard input is set to something else when the process starts.
I didn't include a lot of arguments since those vary from user to user. You can add what you need using additionalArgs.
I removed p.WaitForExit(...) since I wasn't handling failures and it would hang anyway on await tStandardOutput. If a timeout is needed, you would have to call Wait(...) on the different tasks with a cancellation token or timeout and handle it accordingly (see the sketch after the code).
public async Task<byte[]> GeneratePdf(string html, string additionalArgs)
{
    ProcessStartInfo psi = new ProcessStartInfo
    {
        FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe",
        UseShellExecute = false,
        CreateNoWindow = true,
        RedirectStandardInput = true,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        Arguments = "-q -n " + additionalArgs + " - -"
    };

    using (var p = Process.Start(psi))
    using (var pdfStream = new MemoryStream())
    using (var utf8Writer = new StreamWriter(p.StandardInput.BaseStream, Encoding.UTF8))
    {
        // "-" as input reads the HTML from stdin; "-" as output writes the PDF to stdout.
        await utf8Writer.WriteAsync(html);
        utf8Writer.Close();

        var tStandardOutput = p.StandardOutput.BaseStream.CopyToAsync(pdfStream);
        var tStandardError = p.StandardError.ReadToEndAsync();

        await tStandardOutput;
        string errors = await tStandardError;
        if (!string.IsNullOrEmpty(errors)) { /* deal/log with errors */ }

        return pdfStream.ToArray();
    }
}
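If a timeout really is needed, one option (a sketch only, assuming a .NET version without Process.WaitForExitAsync) is to race the stdout copy against Task.Delay inside the using block above:

// Replace the plain "await tStandardOutput;" with something like this:
var timeout = Task.Delay(TimeSpan.FromSeconds(60));
if (await Task.WhenAny(tStandardOutput, timeout) == timeout)
{
    try { p.Kill(); } catch (InvalidOperationException) { /* process already exited */ }
    throw new TimeoutException("wkhtmltopdf did not finish within 60 seconds.");
}
await tStandardOutput; // propagate any exception from the copy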
Things I haven't included here, but which could be useful if you have images, CSS or other resources that wkhtmltopdf will have to load when rendering the HTML page:
you can pass the authentication cookie using --cookie
in the head of the HTML page, you can set the base tag with href pointing to the server, and wkhtmltopdf will use that if need be
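For example, a sketch of the cookie option via additionalArgs (the cookie name and value are placeholders; wkhtmltopdf's --cookie switch takes a name followed by a value):

// Forward the caller's auth cookie so wkhtmltopdf can fetch protected CSS/JS/images.
// ".ASPXAUTH" and authCookieValue are placeholders for whatever your site uses.
string additionalArgs = "--cookie .ASPXAUTH " + authCookieValue;
byte[] pdf = await GeneratePdf(html, additionalArgs);

// And in the generated HTML itself, a <base> tag in <head> lets relative URLs resolve:
// <base href="https://your-server.example.com/" />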
Thanks for the question / answer / all the comments above. I came upon this when I was writing my own C# wrapper for WKHTMLtoPDF and it answered a couple of the problems I had. I ended up writing about this in a blog post - which also contains my wrapper (you'll no doubt see the "inspiration" from the entries above seeping into my code...)
Making PDFs from HTML in C# using WKHTMLtoPDF
Thanks again guys!
The ASP .Net process probably doesn't have write access to the directory.
Try telling it to write to %TEMP%, and see if it works.
Also, make your ASP .Net page echo the process's stdout and stderr, and check for error messages.
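A hedged sketch of that kind of diagnostic output, assuming p is the Process from the question and output redirection is enabled before starting it:

// Requires UseShellExecute = false plus the two redirect flags below.
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.Start();
string stdout = p.StandardOutput.ReadToEnd();
string stderr = p.StandardError.ReadToEnd();
p.WaitForExit();

// Echo whatever wkhtmltopdf printed so you can see why no file appeared.
Response.Write("<pre>" + HttpUtility.HtmlEncode(stdout + Environment.NewLine + stderr) + "</pre>");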
Generally, a return code of 0 means the PDF file was created properly and correctly; if it was not created, the value will be negative.
using System;
using System.Diagnostics;
using System.Web;
public partial class pdftest : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
}
private void fn_test()
{
try
{
string url = HttpContext.Current.Request.Url.AbsoluteUri;
Response.Write(url);
ProcessStartInfo startInfo = new ProcessStartInfo();
startInfo.FileName =
#"C:\PROGRA~1\WKHTML~1\wkhtmltopdf.exe";//"wkhtmltopdf.exe";
startInfo.Arguments = url + #" C:\test"
+ Guid.NewGuid().ToString() + ".pdf";
Process.Start(startInfo);
}
catch (Exception ex)
{
string xx = ex.Message.ToString();
Response.Write("<br>" + xx);
}
}
protected void btn_test_Click(object sender, EventArgs e)
{
fn_test();
}
}