This is what the RSS feed looks like: https://reddit.0qz.fun/r/dankmemes/top.json
My script parses "title", "description", and the other item tags from the RSS just fine, but it doesn't parse "content:encoded".
I tried this:
item.getChild("content:encoded").getText();
And this:
item.getChild("encoded").getText();
And this (found on Stackoverflow):
item.getChild("http://purl.org/rss/1.0/modules/content/","encoded").getText();
But nothing works... Could you help me?
The namespace is important: getChild and similar methods need it to parse the content successfully.
Your third example is close, but the parameters are in the wrong order, and you need to pass an object from XmlService.getNamespace rather than a raw string. (The signature is getChild(name, namespace), not getChild(string, string).)
The tricky part is that the namespace must be included for some of the elements and omitted for others. I am not an XML expert, so I don't know whether this is expected behavior or not. The minimal example script below does find and log the text of the <content:encoded> elements using getChild, but I only worked out when to include or omit the namespace through trial and error. (If anyone has further info on why this is, please let me know in the comments.)
function logContentEncoded() {
  const result = UrlFetchApp.fetch("https://reddit.0qz.fun/r/dankmemes/top.json");
  const document = XmlService.parse(result.getContentText());
  const root = document.getRootElement();
  const namespace = XmlService.getNamespace("http://purl.org/rss/1.0/modules/content/");
  const channel = root.getChild("channel"); // fails if the namespace is included
  const item = channel.getChild("item"); // fails if the namespace is included
  const encoded = item.getChild("encoded", namespace); // fails if the namespace is EXCLUDED
  console.log(encoded.getText());
}
By adding this library to the project: 1Mc8BthYthXx6CoIz90-JiSzSafVnT6U3t0z_W3hLTAX5ek4w0G_EIrNw
you can scrape the page. With the code below, you can get the content of the first <content:encoded> tag.
function getDataFromJson() {
  var url = "https://reddit.0qz.fun/r/dankmemes/top.json";
  var fromText = '<content:encoded>';
  var toText = '</content:encoded>';
  var content = UrlFetchApp.fetch(url).getContentText();
  var scraped = Parser
    .data(content)
    .from(fromText)
    .to(toText)
    .build();
  Logger.log(scraped);
  return scraped;
}
PyQt5's QWebEngineView has a 2 MB size limitation on its .setHtml and .setContent methods. When googling for ways around this, I found two approaches:
Use SimpleHTTPServer to serve the file. This, however, gets blocked by the firewall employed in the company.
Use file URLs pointing to local files. This, however, is a rather bad solution, as the HTML contains confidential data and I can't leave it on the hard drive under any circumstances.
The best solution I currently see is to use file URLs and get rid of the file on program exit or when loadCompleted reports it is done, whichever comes first.
This is not a great solution, however, so I wanted to ask: is there a better approach I'm overlooking?
Why don't you load/link most of the content through a custom url scheme handler?
webEngineView->page()->profile()->installUrlSchemeHandler("app", new UrlSchemeHandler());

class UrlSchemeHandler : public QWebEngineUrlSchemeHandler
{
    Q_OBJECT
public:
    void requestStarted(QWebEngineUrlRequestJob *request) override {
        QUrl url = request->requestUrl();
        QString filePath = url.path().mid(1);
        // get the data for this url
        QByteArray data = ..
        //
        if (!data.isEmpty()) {
            QMimeDatabase db;
            QString contentType = db.mimeTypeForFileNameAndData(filePath, data).name();
            QBuffer *buffer = new QBuffer();
            buffer->open(QIODevice::WriteOnly);
            buffer->write(data);
            buffer->close();
            connect(request, SIGNAL(destroyed()), buffer, SLOT(deleteLater()));
            request->reply(contentType.toUtf8(), buffer);
        } else {
            request->fail(QWebEngineUrlRequestJob::UrlNotFound);
        }
    }
};
You can then load a website with webEngineView->load(QUrl("app://start.html"));
All relative paths from inside it will also be forwarded to your UrlSchemeHandler.
And remember to add the respective includes:
#include <QWebEngineUrlRequestJob>
#include <QWebEngineUrlSchemeHandler>
#include <QMimeDatabase>
#include <QBuffer>
One way to get around this is to use requests together with QWebEnginePage's runJavaScript method:
import requests
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebEngineWidgets import QWebEngineView

app = QApplication([])
web_engine = QWebEngineView()
web_page = web_engine.page()
web_page.setHtml('')
url = 'https://youtube.com'
page_content = requests.get(url).text
# document.write writes a string of text to a document stream
# https://developer.mozilla.org/en-US/docs/Web/API/Document/write
# The backticks (``) delimit a multiline JavaScript template string
web_page.runJavaScript('document.write(`{}`);'.format(page_content))
web_engine.show()
app.exec_()
I'm having trouble accessing a text file that is packaged with my Windows Phone 8 app.
Here is the code:
var ResourceStream = Application.GetResourceStream(new Uri("Data-Test.docx", UriKind.Relative));
if (ResourceStream != null)
{
    Stream myFileStream = ResourceStream.Stream;
    if (myFileStream.CanRead)
    {
        // logic here
        return "Hi";
    }
}
else
{
    return "hello";
}
Seems simple, but the app always returns "hello". I have placed the file in the root and also in Assets, and changed it to Content (copy and do not copy) and Resource (copy and do not copy), but it always returns "hello".
I've spent several hours on this, and every solution I can find shows the same code as above, or something very similar!
What am I doing wrong?
EDIT: It returns "hello" when I deploy to the phone or the emulator.
I also tried "/Data-Test...", @"\Data-Test...", and @"/Data-Test..."!
UPDATE 1:
string aReturn = "";
var asm = Assembly.GetExecutingAssembly();
//Use this to verify the namespacing of the "Embedded Resource".
//asm.GetManifestResourceNames()
//   .ToList()
//   .ForEach(name => Debug.WriteLine(name));
var ResourceStream = asm.GetManifestResourceStream("ContosoSocial.Assets.QuizQuestions.QuizQuestions-Test1.docx");
if (ResourceStream != null) // <-- CHECKED AND DOES NOT EQUAL NULL
{
    Stream myFileStream = ResourceStream;
    if (myFileStream.CanRead) // <-- CHECKED AND CAN READ
    {
        StreamReader myStreamReader = new StreamReader(myFileStream);
        // LOGIC & EXCEPTION HERE...?
        string myLine = myStreamReader.ReadLine();
    }
    else
    {
        aReturn = "myFileStream.CanRead = false";
    }
}
else
{
    aReturn = "stream equals null";
}
Debug.WriteLine(aReturn);
}
The assignment of myFileStream to a StreamReader object is throwing a null reference exception. I thought I would wrap myFileStream in a StreamReader so I can read a line at a time. This is my first C# project and I'm unfamiliar with its syntax and classes.
UPDATE 2:
OK I added...
Debug.WriteLine(aReturn);
...following...
string myLine = myStreamReader.ReadLine();
...and noticed it was retrieving only the two characters 'PK'! ('PK' is the ZIP file signature; a .docx is really a ZIP archive, so reading it as plain text only returns the container header.)
So I saved the .docx file as .txt, re-added it, and changed the build action to Embedded Resource (do not copy)... Happy days, it now pulls the first line from the file.
Thanks to OmegaMan for your help with this one :-)
Change the file's build action in the project to Embedded Resource.
Then extract the resource by working out its namespaced location. Here is example code where I pull in an XSD:
Code:
var asm = Assembly.GetExecutingAssembly();
// Use this to verify the namespacing of the "Embedded Resource".
// asm.GetManifestResourceNames()
// .ToList()
// .ForEach(name => Debug.WriteLine(name));
var f1 = asm.GetManifestResourceStream("UnitTests.Resources.NexusResponse.xsd");
Note this is not tested on WP8, but GetExecutingAssembly is documented to work in .NET. If you get the namespace wrong, uncomment the code above and display or debug the names to determine each resource and its namespace.
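If it helps, here's a minimal sketch of reading such a resource once you have the stream. It assumes the file has been re-saved as plain text and re-added as an Embedded Resource; the resource name below is hypothetical, so verify the real one with GetManifestResourceNames first.
// Minimal sketch; requires System.Diagnostics, System.IO and System.Reflection.
// The resource name is hypothetical - verify it via asm.GetManifestResourceNames().
static void DumpEmbeddedTextResource()
{
    var asm = Assembly.GetExecutingAssembly();
    using (Stream stream = asm.GetManifestResourceStream("ContosoSocial.Assets.QuizQuestions-Test1.txt"))
    {
        if (stream == null)
        {
            Debug.WriteLine("Resource not found - check the namespace and file name.");
            return;
        }
        using (var reader = new StreamReader(stream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                Debug.WriteLine(line);
            }
        }
    }
}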
There is a Windows.Storage.FileIO.WriteBufferAsync method, but it's not supported on WP8. What's the right way to save a buffer to a storage file on a phone?
You have a couple of options for how to proceed here. The extensions you'll need to make working with IBuffer objects easier are all located in the System.Runtime.InteropServices.WindowsRuntime namespace. You may also need the System.IO namespace for the OpenStreamForWriteAsync extension.
private async void SaveBuffer(Windows.Storage.Streams.IBuffer myBuffer)
{
    Windows.Storage.StorageFile myFile = await Windows.Storage.StorageFile.GetFileFromPathAsync("...");
    using (var writeStream = await myFile.OpenStreamForWriteAsync())
    {
        // Use one of the following (not both):
        // Option 1: Cast to a stream and copy
        myBuffer.AsStream().CopyTo(writeStream);
        // Option 2: Cast to a byte array and write
        var content = myBuffer.ToArray();
        writeStream.Write(content, 0, content.Length);
    }
}
Ref:
IBuffer.AsStream Extension
IBuffer.ToArray Extension
I'm attempting to create a PDF file from an HTML file. After looking around a little I've found: wkhtmltopdf to be perfect. I need to call this .exe from the ASP.NET server. I've attempted:
Process p = new Process();
p.StartInfo.UseShellExecute = false;
p.StartInfo.FileName = HttpContext.Current.Server.MapPath("wkhtmltopdf.exe");
p.StartInfo.Arguments = "TestPDF.htm TestPDF.pdf";
p.Start();
p.WaitForExit();
But no files are ever created on the server. Can anyone give me a pointer in the right direction? I put the wkhtmltopdf.exe file in the top-level directory of the site. Is there anywhere else it should be held?
Edit: If anyone has better solutions for dynamically creating PDF files from HTML, please let me know.
Update:
My answer below creates the PDF file on disk. I then streamed that file to the user's browser as a download. Consider using something like Hath's answer below to get wkhtmltopdf to output to a stream instead and then send that directly to the user - that will bypass lots of issues with file permissions etc.
My original answer:
Make sure you've specified an output path for the PDF that is writeable by the ASP.NET process of IIS running on your server (usually NETWORK_SERVICE, I think).
Mine looks like this (and it works):
/// <summary>
/// Convert the HTML page at a given URL to a PDF file using the open-source tool wkhtmltopdf
/// </summary>
/// <param name="Url"></param>
/// <param name="outputFilename"></param>
/// <returns></returns>
public static bool HtmlToPdf(string Url, string outputFilename)
{
    // assemble destination PDF file name
    string filename = ConfigurationManager.AppSettings["ExportFilePath"] + "\\" + outputFilename + ".pdf";
    // get proj no for header
    Project project = new Project(int.Parse(outputFilename));
    var p = new System.Diagnostics.Process();
    p.StartInfo.FileName = ConfigurationManager.AppSettings["HtmlToPdfExePath"];
    string switches = "--print-media-type ";
    switches += "--margin-top 4mm --margin-bottom 4mm --margin-right 0mm --margin-left 0mm ";
    switches += "--page-size A4 ";
    switches += "--no-background ";
    switches += "--redirect-delay 100";
    p.StartInfo.Arguments = switches + " " + Url + " " + filename;
    p.StartInfo.UseShellExecute = false; // needs to be false in order to redirect output
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.StartInfo.RedirectStandardInput = true; // redirect all 3, as it should be all 3 or none
    p.StartInfo.WorkingDirectory = StripFilenameFromFullPath(p.StartInfo.FileName);
    p.Start();
    // read the output here...
    string output = p.StandardOutput.ReadToEnd();
    // ...then wait n milliseconds for exit (as after exit, it can't read the output)
    p.WaitForExit(60000);
    // read the exit code, close process
    int returnCode = p.ExitCode;
    p.Close();
    // if 0 or 2, it worked (not sure about other values; I want a better way to confirm this)
    return (returnCode == 0 || returnCode == 2);
}
I had the same problem when I tried using MSMQ with a Windows service, but it was very slow for some reason (the process part).
This is what finally worked:
private void DoDownload()
{
    var url = Request.Url.GetLeftPart(UriPartial.Authority) + "/CPCDownload.aspx?IsPDF=False&UserID=" + this.CurrentUser.UserID.ToString();
    var file = WKHtmlToPdf(url);
    if (file != null)
    {
        Response.ContentType = "Application/pdf";
        Response.BinaryWrite(file);
        Response.End();
    }
}

public byte[] WKHtmlToPdf(string url)
{
    var fileName = " - ";
    var wkhtmlDir = "C:\\Program Files\\wkhtmltopdf\\";
    var wkhtml = "C:\\Program Files\\wkhtmltopdf\\wkhtmltopdf.exe";
    var p = new Process();
    p.StartInfo.CreateNoWindow = true;
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.StartInfo.RedirectStandardInput = true;
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.FileName = wkhtml;
    p.StartInfo.WorkingDirectory = wkhtmlDir;
    string switches = "";
    switches += "--print-media-type ";
    switches += "--margin-top 10mm --margin-bottom 10mm --margin-right 10mm --margin-left 10mm ";
    switches += "--page-size Letter ";
    p.StartInfo.Arguments = switches + " " + url + " " + fileName;
    p.Start();
    // read the PDF from standard output
    byte[] buffer = new byte[32768];
    byte[] file;
    using (var ms = new MemoryStream())
    {
        while (true)
        {
            int read = p.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
            if (read <= 0)
            {
                break;
            }
            ms.Write(buffer, 0, read);
        }
        file = ms.ToArray();
    }
    // wait for exit
    p.WaitForExit(60000);
    // read the exit code, close process
    int returnCode = p.ExitCode;
    p.Close();
    return returnCode == 0 ? file : null;
}
Thanks Graham Ambrose and everyone else.
OK, so this is an old question, but an excellent one. And since I did not find a good answer, I made my own :) Also, I've posted this super simple project to GitHub.
Here is some sample code:
var pdfData = HtmlToXConverter.ConvertToPdf("<h1>SOO COOL!</h1>");
Here are some key points:
No P/Invoke
No creating of a new process
No file-system (all in RAM)
Native .NET DLL with intellisense, etc.
Ability to generate a PDF or PNG (HtmlToXConverter.ConvertToPng)
Check out the C# wrapper library (using P/Invoke) for the wkhtmltopdf library: https://github.com/pruiz/WkHtmlToXSharp
There are many reasons why this is generally a bad idea. How are you going to control the executables that get spawned off but end up living on in memory if there is a crash? What about denial-of-service attacks, or if something malicious gets into TestPDF.htm?
My understanding is that the ASP.NET user account will not have the right to log on locally. It also needs the correct file permissions to access the executable and to write to the file system. You need to edit the local security policy to let the ASP.NET user account (probably ASPNET) log on locally (it may be in the deny list by default), and then edit the NTFS permissions on the other files. If you are in a shared hosting environment it may be impossible to apply the configuration you need.
The best way to use an external executable like this is to queue jobs from the ASP.NET code and have some sort of service monitor the queue; a rough sketch of that follows. If you do this you will protect yourself from all sorts of bad things happening. The maintenance issues with changing the user account are not worth the effort in my opinion, and while setting up a service or scheduled job is a pain, it's just a better design. The ASP.NET page should poll a result queue for the output, and you can present the user with a wait page. This is acceptable in most cases.
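Here is that sketch. PdfJob and PdfJobQueue (and the sourceUrl/targetPath variables) are hypothetical placeholders for whatever queue you actually use (MSMQ, a database table, etc.); the point is only to show the split between the web code and the service.
// Hypothetical job type; the queue behind PdfJobQueue could be MSMQ, a database table, etc.
public class PdfJob
{
    public Guid Id { get; set; }
    public string Url { get; set; }
    public string OutputPath { get; set; }
}

// In the ASP.NET page: enqueue the work and show a wait page that polls for the result.
var job = new PdfJob { Id = Guid.NewGuid(), Url = sourceUrl, OutputPath = targetPath };
PdfJobQueue.Enqueue(job);

// In the service that monitors the queue: run wkhtmltopdf outside the web worker process.
PdfJob next = PdfJobQueue.Dequeue();
if (next != null)
{
    var psi = new System.Diagnostics.ProcessStartInfo
    {
        FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe", // path is an assumption
        Arguments = next.Url + " " + next.OutputPath,
        UseShellExecute = false
    };
    using (var proc = System.Diagnostics.Process.Start(psi))
    {
        proc.WaitForExit();
        PdfJobQueue.MarkComplete(next.Id, succeeded: proc.ExitCode == 0);
    }
}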
You can tell wkhtmltopdf to send its output to stdout by specifying "-" as the output file.
You can then read the output from the process into the response stream and avoid the permission issues that come with writing to the file system.
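Something along these lines (a sketch only; the executable path and source URL are assumptions, and error handling is omitted):
var p = new System.Diagnostics.Process();
p.StartInfo.FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe"; // path is an assumption
p.StartInfo.Arguments = "http://example.com/TestPDF.htm -"; // "-" sends the PDF to stdout
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardOutput = true;
p.Start();

// Copy the PDF bytes from the process straight into the HTTP response.
Response.ContentType = "application/pdf";
p.StandardOutput.BaseStream.CopyTo(Response.OutputStream);
p.WaitForExit();
Response.End();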
My take on this with 2018 stuff.
I am using async, and I am streaming to and from wkhtmltopdf. I created a new StreamWriter because wkhtmltopdf expects UTF-8 by default, but standard input is set to something else when the process starts.
I didn't include a lot of arguments, since those vary from user to user. You can add what you need using additionalArgs.
I removed p.WaitForExit(...) since I wasn't handling the case where it fails, and it would hang anyway on await tStandardOutput. If a timeout is needed, you would have to call Wait(...) on the different tasks with a cancellation token or a timeout and handle that accordingly.
public async Task<byte[]> GeneratePdf(string html, string additionalArgs)
{
    ProcessStartInfo psi = new ProcessStartInfo
    {
        FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe",
        UseShellExecute = false,
        CreateNoWindow = true,
        RedirectStandardInput = true,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        Arguments = "-q -n " + additionalArgs + " - -"
    };
    using (var p = Process.Start(psi))
    using (var pdfStream = new MemoryStream())
    using (var utf8Writer = new StreamWriter(p.StandardInput.BaseStream,
                                             Encoding.UTF8))
    {
        await utf8Writer.WriteAsync(html);
        utf8Writer.Close();
        var tStandardOutput = p.StandardOutput.BaseStream.CopyToAsync(pdfStream);
        var tStandardError = p.StandardError.ReadToEndAsync();
        await tStandardOutput;
        string errors = await tStandardError;
        if (!string.IsNullOrEmpty(errors)) { /* deal with/log the errors */ }
        return pdfStream.ToArray();
    }
}
Things I haven't included here, but that could be useful if you have images, CSS or other assets that wkhtmltopdf has to load when rendering the HTML page (a small sketch follows):
you can pass the authentication cookie using --cookie
in the head of the HTML page, you can set the base tag with href pointing to your server, and wkhtmltopdf will use that if need be
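For example, building on the psi above (the cookie name, value and host are placeholders):
// Forward the caller's auth cookie to wkhtmltopdf (names and values are placeholders).
string cookieSwitch = "--cookie ASP.NET_SessionId " + Request.Cookies["ASP.NET_SessionId"].Value;
psi.Arguments = "-q -n " + cookieSwitch + " - -";

// Or, in the HTML you feed to wkhtmltopdf, anchor relative URLs to your server:
// <head><base href="https://myserver.example/"></head>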
Thanks for the question / answer / all the comments above. I came upon this when I was writing my own C# wrapper for WKHTMLtoPDF and it answered a couple of the problems I had. I ended up writing about this in a blog post - which also contains my wrapper (you'll no doubt see the "inspiration" from the entries above seeping into my code...)
Making PDFs from HTML in C# using WKHTMLtoPDF
Thanks again guys!
The ASP.NET process probably doesn't have write access to the directory.
Try telling it to write to %TEMP%, and see if it works.
Also, make your ASP.NET page echo the process's stdout and stderr, and check for error messages; a quick sketch of both checks follows.
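The paths and file names here are assumptions, and the code assumes it runs inside a Page or handler:
var p = new System.Diagnostics.Process();
p.StartInfo.FileName = Server.MapPath("wkhtmltopdf.exe"); // assumed location
// Write the PDF into %TEMP%, which the worker process can normally write to.
string outputPath = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "TestPDF.pdf");
p.StartInfo.Arguments = Server.MapPath("TestPDF.htm") + " " + outputPath;
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.Start();

// Read stdout asynchronously while draining stderr, to avoid a pipe deadlock,
// then echo both into the page so any error messages are visible.
var stdoutTask = p.StandardOutput.ReadToEndAsync();
string stderr = p.StandardError.ReadToEnd();
p.WaitForExit();
Response.Write("<pre>" + Server.HtmlEncode(stdoutTask.Result) + "</pre>");
Response.Write("<pre>" + Server.HtmlEncode(stderr) + "</pre>");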
Generally, a return code of 0 means the PDF file was created properly and correctly; if it was not created, the value is negative.
using System;
using System.Diagnostics;
using System.Web;

public partial class pdftest : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    private void fn_test()
    {
        try
        {
            string url = HttpContext.Current.Request.Url.AbsoluteUri;
            Response.Write(url);
            ProcessStartInfo startInfo = new ProcessStartInfo();
            startInfo.FileName =
                @"C:\PROGRA~1\WKHTML~1\wkhtmltopdf.exe"; //"wkhtmltopdf.exe";
            startInfo.Arguments = url + @" C:\test"
                + Guid.NewGuid().ToString() + ".pdf";
            Process.Start(startInfo);
        }
        catch (Exception ex)
        {
            string xx = ex.Message.ToString();
            Response.Write("<br>" + xx);
        }
    }

    protected void btn_test_Click(object sender, EventArgs e)
    {
        fn_test();
    }
}