I'm working on a SharePoint web part that displays a number of different reports in different divs on the page. In one of these divs, I need to display the HTML from a page we have stored in the 'Documents' container within SharePoint. The info in the HTML page is pulled from several different parts of the application and displayed in different ways, so we're essentially using it as source data. I'm trying to figure out how to access the page from within the app and, ideally, store the link to the file as a configurable setting so I can set it up for our dev/test/prod environments.
I've loaded the HTML file into the 'Documents' folder, and if I browse to it manually it displays fine, but if I use the following:
SPSecurity.RunWithElevatedPrivileges(delegate
{
    using (System.Net.WebClient client = new System.Net.WebClient())
    {
        string htmlCode = client.DownloadString(url);
    }
});
I get a 403 error, and in the response headers the message, "Before opening files from this location you must first browse to the website and select the option to login automatically".
I thought RunWithElevatedPrivileges would pass the credentials through, but I'm pretty new to SharePoint. I'm not sure if I'm using the right approach; any help is appreciated.
Put the pages into a standard document library, then use the Page Viewer web part. (The Site Assets library is intended for other customization purposes.) You don't even need SharePoint Designer. Set the Page Viewer to display a "Web Page", since the page viewer essentially becomes an IFRAME.
If you still have trouble, it may be a setting at the Web Application level that's causing issues with non-Microsoft file types.
Go to Central Admin > Manage Web Applications, choose your Web Application, and click the "General Settings" button. Change Browser File Handling from "Strict" to "Permissive"; that fixed my issue.
Figured it out. There were a number of permissions problems but once those were sorted this code worked:
// Re-open the current site so we get our own disposable SPSite/SPWeb objects
using (SPSite site = new SPSite(SPContext.Current.Site.ID))
{
    using (SPWeb web = site.OpenWeb())
    {
        SPFolder folder = web.GetFolder("MainFolder/Lists/MainFolderDocs");
        if (folder.Exists)
        {
            SPFile file = web.GetFile("/MainFolder/Lists/MainFolderDocs/Mainlist.html");
            if (file.Exists)
            {
                // Read the stored HTML page and push it into the control on the web part
                using (System.IO.StreamReader reader = new System.IO.StreamReader(file.OpenBinaryStream()))
                {
                    string htmlCode = reader.ReadToEnd();
                    lChecklist.Text = htmlCode;
                }
            }
        }
    }
}
I have the basic shell of a Chrome extension done and have come to the point where I am trying to inject an HTML signature into Gmail using code hosted on an unindexed page on my site. The reason I want to do this is to be able to include web fonts, something that, for the life of me, I can't figure out why Gmail doesn't allow from its font library.
In any regard, as I said, I have a right-click context menu option ready to trigger a script from my js function page, and the extension loads without errors. I need to figure out the best way to inject the HTML into the email without losing any of the formatting that has been done on the page.
I have created the extension manifest, set the permissions on the context menu and created a function to call back to the js page that will inject the signature.
var contextMenus = {};

// Register a context-menu entry that only appears in editable fields
contextMenus.createSignature =
    chrome.contextMenus.create(
        {
            "title": "Inject Signature",
            "contexts": ["editable"]
        },
        function () {
            if (chrome.runtime.lastError) {
                console.error(chrome.runtime.lastError.message);
            }
        }
    );

chrome.contextMenus.onClicked.addListener(contextMenuHandler);

// When our menu item is clicked, inject the script that writes the signature
function contextMenuHandler(info, tab) {
    if (info.menuItemId === contextMenus.createSignature) {
        chrome.tabs.executeScript({
            file: 'js/signature.js'
        });
    }
}
The end result is that nothing enters the page, and I get massive cross-site errors because the domain is obviously not the same. This has clearly been solved before, as there are numerous signature extensions out there. I would probably use one of theirs, but a) I want to build it on my own, and b) they all want you to use their templates; none of them that I have seen will let you just use your own code.
So, any ideas?
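For illustration only (this is not from the original thread): one common pattern is to fetch the signature HTML from the background page, which may make cross-origin requests as long as the signature's host is listed under "permissions" in the manifest, and then hand the markup to a content script that inserts it at the caret. The URL, message name, and permission entry below are assumptions.

// background.js - Manifest V2, reworking the handler from the question.
// Assumes the manifest contains: "permissions": ["https://www.example.com/*"]
function contextMenuHandler(info, tab) {
    if (info.menuItemId !== contextMenus.createSignature) return;

    // Background pages are not bound by the page's cross-origin rules
    // for hosts they have permission for, so the fetch succeeds here.
    fetch('https://www.example.com/signature.html')        // assumed URL
        .then(function (response) { return response.text(); })
        .then(function (html) {
            // Inject the content script first, then hand it the markup;
            // the executeScript callback fires once the script is in place.
            chrome.tabs.executeScript(tab.id, { file: 'js/signature.js' }, function () {
                chrome.tabs.sendMessage(tab.id, { type: 'insertSignature', html: html });
            });
        });
}

// js/signature.js - runs inside the Gmail tab.
chrome.runtime.onMessage.addListener(function (msg) {
    if (msg.type === 'insertSignature') {
        // The right-click happened in an editable area, so the compose box
        // should still have focus; insertHTML keeps the signature's formatting.
        document.execCommand('insertHTML', false, msg.html);
    }
});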
I am trying to automate a test case in Chrome where I would like to upload an attachment to an email. I use desiredCaps['browserName'] = 'Chrome'. When I click the attachment option in the email, it opens the Documents picker on the phone, but I am unable to detect the elements on the Documents screen.
Try this if you are using Ruby.
This basically goes into the directory called "screenshots" and clicks the second picture or document visible inside it:
find_element(id: "screenshots").find_elements(class: "android.widget.ImageView")[1].click
This captures the first document/picture visible in the gallery
find_element(id: "").find_element(class: "android.widget.ImageView").click
You can modify it as per your requirement
You should change the context from Chromium to 'NATIVE_APP' (see the Appium docs on hybrid apps: http://appium.io/docs/en/writing-running-appium/web/hybrid/) and use Touch Actions to choose your file.
In Java, you can use the below code to switch context.
Set<String> contextNames = driver.getContextHandles();
for (final String contextName : contextNames) {
    if (contextName.contains("NATIVE")) {
        driver.context(contextName);
        System.out.println("Switched to Native Context");
    }
}
In Python, you can try something like this:
contextNames = driver.contexts
for aContext in contextNames:
    if "NATIVE" in aContext:
        driver.switch_to.context(aContext)
I built a Chrome extension that saves data to localStorage from the background page (using chrome.storage.sync.set).
Now, say I want to build a website and access the extension's data in that storage from the website. Is it possible to access this data from the website's domain? Maybe I can add something to the manifest file to allow that?
You would have to inject a content script into your website and then have your background script pass the localStorage data to your content script. As for communication between your content script and the script on your website, you'll have to get creative.
I am assuming here that you are aware of the message passing procedures between the content script and background script.
Now, I don't think your website can make a request to the extension and "pull" data from it, but you can certainly have your extension "push" data to your website.
This is how you can do it:
Content Script
The content script should check whether the site open in the browser is your website, say www.yourwebsite.com:
// currentUrl can be taken from the page the content script is running in
var currentUrl = window.location.hostname;
if (currentUrl == "www.yourwebsite.com")
{
    ....
}
If it is indeed your website, pull the required data from the background script
if (currentUrl == "www.yourwebsite.com")
{
    chrome.extension.sendRequest({ "getLocalStorageData": true, "dataFieldName": "favouriteColor" }, handleLocalStorageResult);
}

function handleLocalStorageResult(dataValue)
{
    .....
}
On receiving the data in handleLocalStorageResult, inject it into the page's HTML so that your website's JavaScript can read it:
if (currentUrl == "www.yourwebsite.com")
{
    chrome.extension.sendRequest({ "getLocalStorageData": true, "dataFieldName": "favouriteColor" }, handleLocalStorageResult);
}

function handleLocalStorageResult(dataValue)
{
    var localStorageDataDiv = $('<div>').appendTo('body');
    localStorageDataDiv.attr('id', 'extensionData');
    localStorageDataDiv.html(dataValue);
}
Your Website's JavaScript
Now your website's JavaScript can read the data:
var data = $('#extensionData').html();
alert("My Extension's LocalStorage Data is " + data);
Answers to date do not mention that content scripts and the webpage share their localStorage object. If your content script writes to localStorage, the webpage will later be able to read it, and vice versa.
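A minimal sketch of that shared-localStorage path (the key name and value are made up for the example):

// content script: window.localStorage here is the page's own localStorage,
// because content scripts run in the page's origin
localStorage.setItem('extensionFavouriteColor', 'blue');   // hypothetical key/value

// the website's own JavaScript, on the same origin, can read it back later
var colour = localStorage.getItem('extensionFavouriteColor');
if (colour) {
    console.log('Value written by the extension: ' + colour);
}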
I want to ask how to embed a DWG file in an HTML page.
I have tried embedding it with Volo Viewer, but that solution runs only in IE, not in Firefox or Chrome.
Dwgview-x can do that, but it will need to be installed as a plug-in on client computers so that anyone can view the dwg file that you embed online.
There may be third-party ActiveX controls that you could use, but I think ultimately you will find that it's not practical for drawing files of even average complexity. I recommend creating DWF (if you need a vector format) or PNG files on demand (using, e.g., the free DWG TrueView from http://usa.autodesk.com/design-review/) and embedding those instead.
I use DWG Browser. It's a stand-alone program used for reporting and categorizing drawings with previews. It can export to HTML too.
They have a free demo download available.
http://www.graytechnical.com/software/dwg-browser/
You'll find what I think is the latest information on Autodesk's labs site here: http://labs.blogs.com/its_alive_in_the_lab/2014/01/share-your-autodesk-360-designs-on-company-web-sites.html
It looks like a DWG can be embedded (there is an example on that page), but clearly DWF is the way to go.
You can embed a DWG file's content in an HTML page by rendering the file's pages as HTML pages or images. If you find that an attractive solution, you can do it using the GroupDocs.Viewer API, which allows you to render document pages as HTML pages, images, or a whole PDF document. You can then include the rendered HTML/image pages or the PDF document in your HTML page.
Using C#
ViewerConfig config = new ViewerConfig();
config.StoragePath = "D:\\storage\\";
// Create HTML handler (or ViewerImageHandler for rendering the document as images)
ViewerHtmlHandler htmlHandler = new ViewerHtmlHandler(config);
// The guid is simply the unique document name
string guid = "sample.dwg";
// Get the document pages in HTML form
List<PageHtml> pages = htmlHandler.GetPages(guid);
// Or get the document pages in image form using the image handler
//List<PageImage> pages = imageHandler.GetPages(guid);
foreach (PageHtml page in pages)
{
    // Get the HTML content of each page using page.HtmlContent
}
Using Java
// Set up the GroupDocs.Viewer config
ViewerConfig config = new ViewerConfig();
// Set storage path
config.setStoragePath("D:\\storage\\");
// Create HTML handler (or ViewerImageHandler for rendering the document as images)
ViewerHtmlHandler htmlHandler = new ViewerHtmlHandler(config);
String guid = "Sample.dwg";
// Get the document pages in HTML form
List<PageHtml> pages = htmlHandler.getPages(guid);
for (PageHtml page : pages) {
    // Get the HTML content of each page using page.getHtmlContent
}
Disclosure: I work as a Developer Evangelist at GroupDocs.
I am trying to use HTML5 Appcache to speed up my web-mobile app by caching images and css/JS files. The app is based on dynamic web pages.
As is already known, when using AppCache the calling HTML page is always cached, which is bad for dynamic websites.
My solution: create a first static page that references the manifest file (manifest="cache.appcache") and loads all the content I want cached. Then, when the user is redirected to another dynamic page, the resources will already be available. (Of course, this second dynamic page does not have the manifest attribute.)
The problem is that if the second page is refreshed by the user, the resources are not loaded from the cache; they are loaded directly from the server!
This solution is very similar to using an iframe on the first dynamic file. I found that the iframe solution has the exact same problem.
Is there any solution for that? Can Appcache really be used with dynamic content?
Thanks
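For reference, the first-page pattern described in the question usually looks something like this; the file names here are placeholders, not taken from the post.

<!-- loader.html: the only page that points at the manifest -->
<!DOCTYPE html>
<html manifest="cache.appcache">
<head>
    <link rel="stylesheet" href="css/app.css">
    <script src="js/app.js"></script>
</head>
<body>
    <a href="app.aspx">Continue to the app</a>
</body>
</html>

And cache.appcache itself:

CACHE MANIFEST
# v1 - change this comment to force clients to re-download

CACHE:
css/app.css
js/app.js
images/logo.png

NETWORK:
*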
Yes, AppCache can be used for dynamic content if you handle your URL parameters differently.
I solved this by using local storage (I used the jQuery localStorage plugin to help with this).
The process is:
Internally, from the page, when you would normally follow an anchor href or redirect, instead call a function that redirects for you. This function stores the URL's parameters in localStorage and then redirects to the URL without the parameters.
On the receiving target page, get the parameters from localStorage.
Redirect code
function redirectTo(url) {
    if (url.indexOf('?') === -1) {
        document.location = url;
    } else {
        var params = url.split('?')[1];
        $.localStorage.set("pageparams", params);
        document.location = url.split('?')[0];
    }
}
Target page code
var myParams = GetPageParamsAsJson();
var page = myParams.page;

function GetPageParamsAsJson() {
    return convertUrlParamsToJson($.localStorage.get('pageparams'));
}

function convertUrlParamsToJson(params) {
    if (params) {
        var json = '{"' + decodeURI(params).replace(/"/g, '\\"').replace(/&/g, '","').replace(/=/g, '":"') + '"}';
        return JSON.parse(json);
    }
    return [];
}
I had a hell of a time figuring out how to cache dynamic pages accessed by a URI scheme like this:
domain.com/admin/page/1
domain.com/admin/page/2
domain.com/admin/page/3
Now the problem is that the appcache won't cache each individual admin/page/... unless you visit it.
What I did was use the offline page to stand in for the pages you may want a user to access offline.
The JS in the offline page looks at the URI to work out which page it should show, then fetches that page's data from localStorage. localStorage was populated with all the page data when the user visited the admin dashboard, before being presented with the links to each individual page.
I'm open to other solutions but this is all I could figure out to bring a bunch of separate pages offline with only visiting the single admin page.
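To make that concrete, here is a rough sketch of what the offline page's script could look like, assuming the offline page is served via a FALLBACK entry in the manifest (so location still holds the originally requested URI); the URL pattern, localStorage key, and #content element are illustrative, not from the original answer.

// e.g. the user asked for /admin/page/2 while offline
var match = window.location.pathname.match(/\/admin\/page\/(\d+)/);

if (match) {
    // adminPages is assumed to have been written as JSON while the user
    // was last online on the admin dashboard
    var pages = JSON.parse(localStorage.getItem('adminPages') || '{}');
    var pageData = pages[match[1]];
    if (pageData) {
        document.getElementById('content').innerHTML = pageData.html;
    } else {
        document.getElementById('content').textContent =
            'This page has not been viewed online yet, so it is not available offline.';
    }
}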