Interop.Outlook Print to File (PDF)?

There are many libraries out there which purport to transform HTML to PDF. All that I've looked at have their limitations. We don't want to spend any money on this, so I wanted to know if it is possible to print to file in PDF format without all the pop-ups that Outlook would normally produce. We are using Outlook 2013 with Exchange.
This thread suggests that the answer is NO. But this thread suggests that it might be done. I'm looking for a clear path to achieve my goal.
To complicate things, I am using the Mail.Display function to allow the user to modify the email before sending. They can also add attachments if they want. Once they select the Send option, I want to capture the email that was sent and produce a PDF, which will be stored in a data store for easy retrieval by anyone who accesses the customer account. Here is where I run into difficulty: the Mail object is not available after returning from the Display function. How can I get the sent email and process it?

Yes, it is possible.
Outlook uses Word as its email editor, so you can use the Word object model to get the job done. The WordEditor property of the Inspector class returns an instance of the Document class from the Word object model, which represents the message body. See Chapter 17: Working with Item Bodies for more information.
The ExportAsFixedFormat method of the Document class saves the document in PDF or XPS format. As for catching the message at send time: handle the application-level ItemSend event, which fires before the item actually leaves and hands you the item being sent, so you can export it there.
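A minimal sketch of the export step, shown here via late binding from Python/pywin32 (the same calls map one-to-one onto C# Interop.Outlook; the output path is a placeholder, and Outlook must be installed locally):

import win32com.client

WD_EXPORT_FORMAT_PDF = 17  # Word's wdExportFormatPDF constant

outlook = win32com.client.Dispatch("Outlook.Application")
mail = outlook.CreateItem(0)            # 0 = olMailItem
mail.Subject = "Example"
mail.HTMLBody = "<p>Hello</p>"
mail.Display()                          # open an inspector so the user can edit

# WordEditor returns the Word Document behind the inspector; this requires
# Word to be the message editor, which is always the case in Outlook 2013.
document = mail.GetInspector.WordEditor
document.ExportAsFixedFormat(r"C:\temp\mail.pdf", WD_EXPORT_FORMAT_PDF)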

Related

Send Email with all new OneNote entries

I use Microsoft OneNote daily to take notes. I would like to write a script to send myself an email every night with all the new notes I took that day across notebooks so I can review them. This would usually be straightforward in e.g. a Word doc where I can timestamp all saves and take the latest file, diff it with the last file from the previous day and send the diff. Unfortunately OneNote complicates this for at least two reasons:
OneNote autosaves and, as far as I can tell, does not offer the ability to rename saves or add a timestamp to the filename.
Notebooks and pages mean changes are spread across "documents" instead of a single file that can be diff'd.
So I am looking for a solution that considers the complications above. Thanks.
The basic approach via the Microsoft Graph API:
./me/onenote/pages?$filter=lastModifiedDateTime ge yyyy-MM-ddThh:mm:ssZ&$expand=parentNotebook
will yield JSON data with:
title - Page title
links/oneNoteWebUrl - allows opening the OneNote page in a web browser
links/oneNoteClientUrl - allows opening the OneNote page in the OneNote app
parentNotebook/displayName - Notebook name
self - needed to get the page content.
For small page counts this may work, but it is likely to time out with a 504 error on a drive with many pages.
In that case a two-stage approach is required.
./me/onenote/sections?$filter=lastModifiedDateTime ge yyyy-MM-ddThh:mm:ssZ
will return a list of all the sections that have been modified since the defined lastModifiedDateTime.
Next, iterate through the returned JSON data and get the pages modified since lastModifiedDateTime from the returned pagesUrls, using the format
./me/onenote/sections/1-xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx/pages?$filter=lastModifiedDateTime ge yyyy-MM-ddThh:mm:ssZ&$expand=parentNotebook
yielding the same data as noted previously.
Once you have this data you can generate an email containing a list of the modified Notebooks, page names and page links.
If you need the actual page data (content), then you need to call
./me/onenote/pages/1-1c13bcbae2fdd747a95b3e5386caddf1!1-xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx/content?includeIDs=true&includeInkML=true&preAuthenticated=true
which will give you text/html, ink and links to other resources from each page.
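To make this concrete, here is a rough sketch of both queries using python-requests. The bearer-token acquisition (e.g. via MSAL) is assumed and out of scope, the timestamp is a placeholder, and pagination via @odata.nextLink is omitted for brevity:

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "..."  # bearer token acquired elsewhere (e.g. via MSAL)
HEADERS = {"Authorization": "Bearer " + TOKEN}

def pages_modified_since(timestamp):
    # Single-stage query: all pages changed since `timestamp` (ISO 8601 UTC).
    resp = requests.get(
        GRAPH + "/me/onenote/pages",
        headers=HEADERS,
        params={"$filter": "lastModifiedDateTime ge " + timestamp,
                "$expand": "parentNotebook"},
    )
    resp.raise_for_status()
    return resp.json()["value"]

def pages_via_sections(timestamp):
    # Two-stage fallback for drives with many pages: changed sections first,
    # then the changed pages inside each one, via the section's pagesUrl.
    resp = requests.get(
        GRAPH + "/me/onenote/sections",
        headers=HEADERS,
        params={"$filter": "lastModifiedDateTime ge " + timestamp},
    )
    resp.raise_for_status()
    pages = []
    for section in resp.json()["value"]:
        r = requests.get(
            section["pagesUrl"],
            headers=HEADERS,
            params={"$filter": "lastModifiedDateTime ge " + timestamp,
                    "$expand": "parentNotebook"},
        )
        r.raise_for_status()
        pages.extend(r.json()["value"])
    return pages

# Build the body of the nightly email: notebook name, page title, web link.
for page in pages_modified_since("2016-01-01T00:00:00Z"):
    print(page["parentNotebook"]["displayName"],
          page["title"],
          page["links"]["oneNoteWebUrl"]["href"])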

ColdFusion/Railo bulk email - Is there a better way?

My apologies if the answer is here on SO and I missed it. Anyway, I will give as much information as possible about my question.
I have a Railo CF server running on RHEL 6.6 that hosts an application to send out notifications and alerts via email. The emails are sent from the application to our internal cluster of Exchange servers using cfmail. For neatness and a few other reasons, all emails are sent via BCC.
The application was implemented to standardize the way important information is sent to the thousands of employees at my company. To do this, the application uses multiple form fields that require the end-user to enter specific information in each field. Once submitted the application then formats it into different templates depending on the content of the form fields. The end-user also has the capability to send file attachments.
The application works well, that is, until the list of email addresses gets too large. Since the cfexchange tags do not currently have the capability to pull email distribution lists directly from Exchange, users must select email lists from either a pre-populated drop-down or enter individual email addresses as CSV into another form field. This is, for the most part, OK, since the most-used distributions are already in the drop-down list. Unfortunately, some notifications sent by the application must go to CSV lists which can number in the thousands of addresses. When that happens, I get this error:
nested exception is:
class com.sun.mail.smtp.SMTPAddressFailedException: 452 #4.5.3 Too many recipients.
(If anyone knows of a fix for this via the admin or config files, that may solve my issue without reading further.)
Thus I began my search for a better way. I first discovered this: need to slow mass email sending in coldfusion 4.5. However, the emails need to go out quickly, and setting cfschedule to check constantly put a bit of a strain on the server. I could not find anything else here on SO that was relevant.
My google searches have not resulted in anything great. I found one suggestion of using cffile to write to the spool directory of either qmail or postfix on the application server. That method just seemed inefficient and unreliable to me. (However, if someone thinks this may work, please advise.)
I had one idea: instead of immediately sending all those emails out through cfmail, first insert the entire list into the local MySQL database, then write CF code to query that database and loop the cfmail tag over the results. I couldn't come up with any code to do this without using cfschedule or forcing the user to wait until the process finished (which could take a rather long time, especially since the emails have to be relayed to our Exchange servers).
So, any help with this problem would be most welcome. Thank you!
EDIT
In response to just looping over the emails: this is something I considered. I know how to loop over a list of items, but my concern was that each email sent via the cfmail tag requires a login to Exchange, since the application server does not log directly into our Windows domain. I could not figure out a reasonable way to test this, since it would require sending out bulk emails.
The Exchange documentation on microsoft.com is vague at best. Unless I missed it, there does not seem to be a definitive answer on how many recipients it can accept at a time before the 452 error code pops up, nor could I find whether this is something set from within the Exchange server's admin panel. I do not administrate the Exchange servers, so I would have to contact that team at our company to find out.
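One direction I am considering in the meantime is batching: split the BCC list into chunks below whatever recipient cap Exchange enforces and send one message per chunk. A rough sketch of that logic (in Python only because it is quick to test standalone; the batch size of 100 is a pure guess, not the real cap):

import smtplib
from email.mime.text import MIMEText

BATCH_SIZE = 100  # placeholder: the actual Exchange recipient cap is unknown

def send_in_batches(smtp_host, sender, recipients, subject, body):
    # Send one copy of the message per batch of BCC recipients.
    with smtplib.SMTP(smtp_host) as smtp:
        for i in range(0, len(recipients), BATCH_SIZE):
            batch = recipients[i:i + BATCH_SIZE]
            msg = MIMEText(body, "html")
            msg["From"] = sender
            msg["To"] = sender  # BCC addresses go only on the SMTP envelope
            msg["Subject"] = subject
            smtp.sendmail(sender, batch, msg.as_string())

In CFML the equivalent would be an outer loop over chunks of the address list with one cfmail per chunk.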
EDIT #2
Adding in a code example for review. However, I am thinking the isValid() piece will fail if more than 1 email is passed at a time. Thoughts?
<!--- Loop over the submitted BCC list one address at a time, so isValid()
      always receives a single address. --->
<cfloop index="bcc" list="#FORM.email_bcc#" delimiters=", ">
    <cfif isValid("email", bcc)>
        <cfmail bcc="#bcc#"
                from="#email_from#"
                subject="#FORM.pre_email_subject# #Trim(FORM.subject)#"
                type="html"
                to="email@example.com">
            Message goes here.
        </cfmail>
    <cfelse>
        <!--- Report addresses that failed validation. --->
        <cfmail from="#email_from#"
                failto="#fail_to#"
                subject="FAILED #Trim(FORM.subject)#"
                type="html"
                to="#fail_to#">
            Invalid BCC address skipped: #bcc#
        </cfmail>
    </cfif>
</cfloop>

Where is the Data stored on Website

I am at this website -
http://www.zoominfo.com/s/#!search/company/1.64.eyJjb21wYW55TmFtZSI6xIB2YWx1xIw6ImEiLCJpc1VzZWTEjXRyxJN9fQ%3D%3D
If you see the company name - Agilent Technologies Inc. -
it's neither in the page source nor in any JSON response.
But it does show up in the DOM in the Chrome developer tools.
I have looked at and analysed almost every request it sent, but still couldn't find where this data is saved.
By "where the data is saved" I mean: where can I scrape that data from, e.g. using python-requests and BeautifulSoup?
I do see an XMLHttpRequest made, not sure what that means, or if that is the clue to my answer.
I am still learning Python, and it would be very useful information if someone could help me with this.
Thanks in advance.
After the HTML is loaded, JavaScript requests the data through an XMLHttpRequest, and the response is inserted into the page as soon as it arrives on your client. That's why you see the DOM element right there in the element inspector.
You didn't mention what goal you want to achieve or what tool you are using, so please be specific in your question. If you are not familiar with this kind of pattern, search for AngularJS and look at some examples.
I do see an XMLHttpRequest made, not sure what that means, or if that is the clue to my answer.
It means that JavaScript embedded in the page is sending an extra HTTP request to the web server. It is likely that the "Agilent Technologies Inc." text is being returned in the server's response to that request, and the JavaScript in the page is then injecting the text into the DOM in the appropriate place.
Where is the Data stored on Website
That is a completely different question ...
(You have already noted that the data (e.g. the company name) gets injected into the page displayed by your browser.)
On the server side, the data could be stored in the web server (or its back-end systems) in a variety of ways. Or it might not be stored at all. There is no way of knowing ... without looking at the server-side code and configurations.
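As a starting point on the scraping side, once you find the XHR endpoint in the Network tab of the Chrome developer tools (filter by XHR), you can usually replay it directly with python-requests; the URL below is a placeholder, not the real endpoint:

import requests

# Placeholder: copy the real endpoint (plus any required headers, cookies or
# query parameters) from the Network tab of the developer tools.
XHR_URL = "http://www.zoominfo.com/some/xhr/endpoint"

resp = requests.get(XHR_URL, headers={"User-Agent": "Mozilla/5.0"})
resp.raise_for_status()

data = resp.json()  # most XHR endpoints return JSON
print(data)         # inspect the structure to find the company name field

If the response really is JSON, you will not need BeautifulSoup at all.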

Drupal 7 (VERY) Custom Preview

I have a drupal site that is being used strictly as a CMS that produces JSON feeds using services and services_views, which are consumed by a separate site. What I would like to do (and I have a working proof of concept of this) is allow for a "live preview" on the real site, by intercepting the node form preview / submit, encoding the node as JSON, and loading a special page on the live site that consumes that JSON and displays the page accordingly.
The problem with this JSONized node is that it's different from the JSON being produced by my view (using services_views). My end goal is to produce JSON that is identical for both previewed and non-previewed objects, without having to maintain separate output methods. (I could easily hand-customize the JSON, but then whenever my view for the public API changes I would have to make the same changes to the preview JSON. Trying to avoid this.)
I'm looking for feedback on this approach. Is what I'm attempting even possible? The ideas I've been able to come up with so far are:
being able to (conditionally) drive my view with data from a non-database source
sneakily inserting data into the view object during one of the stages of execution? Kludgy but I'm not above that :)
saving a "clone" node (or revision?) of the node being previewed and let the view use that to display the preview JSON?
Maybe this is the wrong approach altogether and there's something better? (Trying to intercept and format the services output in my module... maybe avoid services_views altogether?)
If anyone can offer some advice, insight or opinions on how to best proceed here, I'd be really grateful.
In a custom module, you could set up a page that grabs the JSON output from the view page:
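// Hypothetical: $url is the path to the services_views page that renders the JSON.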
$JSON = file_get_contents($url);
That way the preview stays bound to the view, even if the view changes.
First, I think it's not an easy task you are trying to achieve, so before all, good luck.
I think you could intercept the node submission data, then create a node programmatically, then render that node, and then export the rendered node to JSON. Immediately after you get the JSON, delete this node, because the programmatically created node is only for preview.
This task could be more CPU-demanding, but consider that previewing content exactly as it will finally look is difficult.
Your RSS feeds that your site reads could be filtered with some parameter to exclude programmatically created (preview) nodes, even though these nodes will only exist for a very short time.

Importing csv-style text attachment from gmail into any kind of db/spreadsheet automatically?

Every hour, I get a CSV-style file (delimited by | (pipe)) delivered via email to a Gmail address, with a few rows of stuff like 12X98XJ|75.00|0.00||0.00|23.15
I'd like to automatically import it into / update a database. I was thinking of the Google Docs "email to docs" functionality. Except, helpfully, they seem to have disabled that now.
I feel there MUST be a simple method in existence that does what I want.
Once it's in something where I can get at it with an API, it's plain sailing from then on.
Even something as simple as importing to Amazon SimpleDB would do.
But a good half a day of Googling just leads down disappointing paths.
Two notes:
All email functions are disabled on my server, so the .py scripts I found to retrieve from a local mail store file aren't going to work.
Don't ask me why the data is given to me in such a cack-handed way. It's historical.
I seem to be working with people getting ready to migrate to Windows 3.1.
What about using the pair fetchmail + procmail?
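If fetchmail is not an option either, Gmail also exposes IMAP, so a small Python script run hourly from cron can pull the attachment directly over the network with no local mail store involved. A minimal sketch, assuming IMAP is enabled on the account; the host, credentials, and table layout are placeholders:

import csv
import email
import imaplib
import io
import sqlite3

IMAP_HOST = "imap.gmail.com"
USER = "you@gmail.com"        # placeholder credentials
PASSWORD = "app-password"

def fetch_unseen_attachments():
    # Yield the decoded text of every attachment on unseen messages.
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        for part in msg.walk():
            if part.get_filename():  # the pipe-delimited attachment
                yield part.get_payload(decode=True).decode("utf-8")
    imap.logout()

def import_rows(text):
    # Parse the pipe-delimited rows and insert them into SQLite;
    # six columns to match the sample row above.
    conn = sqlite3.connect("feed.db")
    conn.execute("CREATE TABLE IF NOT EXISTS rows (a, b, c, d, e, f)")
    reader = csv.reader(io.StringIO(text), delimiter="|")
    rows = [r for r in reader if r]  # skip blank trailing lines
    conn.executemany("INSERT INTO rows VALUES (?, ?, ?, ?, ?, ?)", rows)
    conn.commit()
    conn.close()

for text in fetch_unseen_attachments():
    import_rows(text)

Once the rows are in SQLite (or whatever store you swap in), you have an API to get at them and, as you say, it's plain sailing from there.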