Can specific export options be removed at the RDL level?

For each report I'd like to be able to specify either which export options are supported or which are not supported. There's a similar question here: SSRS - Disabling export options (eg. PDF) for individual reports
But those answers are either out of date or apply globally. Is there an option I can set within the RDL to limit the export options to only Excel, for example?
The client is running SSRS 2012.

I recently did this on my system (SSRS 2016) based on this blog post. The post states that it is for 2012/2014, so it should work in your case as well. The change is technically global, in that nothing is set in the individual RDL, but this method still lets you restrict the export options for individual reports.
To summarize, I added the following to my ReportViewer.aspx file:
<script type="text/javascript">
// Disable export options per report.
//
// This JavaScript is used to disable export options in reports. To disable
// an option, add another call to RemoveExportOption.
//
// Example: RemoveExportOption("/ReportProject1/Report1", "Excel");
//
// To implement this, just save the script in the ReportViewer.aspx page on
// the SSRS server. No reboot or restart is needed; it will just work :)
//
var timer;
var dueTime = 0;
var options = {
    "excel": "Excel",
    "xml": "XML file with report data",
    "csv": "CSV (comma delimited)",
    "pdf": "PDF",
    "mhtml": "MHTML (web archive)",
    "tiff": "TIFF file",
    "word": "Word"
};

function RemoveCTLExportFormats(format) {
    // Stop polling after 30 seconds.
    dueTime += 50;
    if (dueTime > 30000) {
        clearTimeout(timer);
        return;
    }
    if (format.toLowerCase() == "all") {
        // Hide the whole export drop-down menu.
        var tables = document.getElementsByTagName("table");
        for (var i = 0; i < tables.length; i++) {
            if (tables[i].title == "Export drop down menu") {
                tables[i].style.display = "none";
            }
        }
    } else {
        // Hide only the link for the given export format.
        var links = document.getElementsByTagName("a");
        for (var i = 0; i < links.length; i++) {
            if (links[i].title == options[format]) {
                links[i].style.display = "none";
            }
        }
    }
    // The toolbar renders asynchronously, so keep re-checking.
    timer = setTimeout(function() { RemoveCTLExportFormats(format); }, 50);
}

function RemoveExportOption(report, format) {
    // Strip "&" and "+" so the URL matches the cleaned report path below.
    var url = unescape(location.href).replace(/&/g, "").replace(/\+/g, "");
    if (url.indexOf(report) != -1) {
        setTimeout(function() { RemoveCTLExportFormats(format.toLowerCase()); }, 50);
    }
}

// List the reports with the options you want removed. Leave special
// characters and spaces out of the report path:
//
// Example: /My New Report/Report Test 1 => /MyNewReport/ReportTest1
//
// To remove multiple options, list the report multiple times. "ALL" removes
// all the export options.
//
// Options (not case sensitive):
//
//   ALL   - Removes all the options
//   Excel - Removes Excel
//   XML   - Removes XML file with report data
//   CSV   - Removes CSV (comma delimited)
//   PDF   - Removes PDF
//   MHTML - Removes MHTML (web archive)
//   TIFF  - Removes TIFF file
//   Word  - Removes Word
//
RemoveExportOption("/Your/Report/Path/Here", "All");
RemoveExportOption("/Another/Report/RemoveWord", "Word");
</script>
I have not worked with SSRS 2012, but based on the blog post, the file might be called Report.aspx on 2012. You will of course want to change the report paths in the last two lines of the script to point to the reports whose export options need restricting.

Related

How would you create a downloadable pdf in a client side app?

One of our requirements for an admin tool is to create a form that can be filled in and translated into a downloadable PDF file (a terms and conditions document with blank input fields, to be exact).
I did some googling and tried creating the form in HTML and CSS, converting it into a canvas with the html2canvas package, and then using the jspdf package to turn that into a PDF file. The problem is that I cannot get it to fit and resize correctly to A4 format with proper margins. I'm sure I could get to a somewhat working solution if I spent some time on it.
However, my real question is: how would you approach this? Is there a 3rd party app/service that does this exact thing? Or would you do all of this server side? Our current app uses Angular 7 with Firebase as our backend.
Cheers!
I was able to use the npm package pdfmake to create a dynamic PDF from information the user provided while interacting with my form (I was using React). It opened the PDF in a new tab, from which the user could save it. In another application (still React), I used the same package to create a receipt, so you can also customize the size of the "page". We created the PDF, used the getBase64() method, and sent the PDF as an email attachment.
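For reference, here is a minimal sketch of that pdfmake flow; the document-definition fields and the formValues object are illustrative, not from the original code:
// Build a document definition, then either open the PDF in a new tab
// or grab it as base64 to send as an email attachment.
var docDefinition = {
    pageSize: 'A4',
    pageMargins: [40, 60, 40, 60], // left, top, right, bottom (points)
    content: [
        { text: 'Terms and Conditions', style: 'header' },
        'Name: ' + formValues.name // hypothetical data collected from the form
    ],
    styles: { header: { fontSize: 16, bold: true } }
};
pdfMake.createPdf(docDefinition).open(); // opens the PDF in a new tab
pdfMake.createPdf(docDefinition).getBase64(function(b64) {
    // post b64 to the server to send as an email attachment
});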
My service function:
getEvidenceFile(id: number, getFileContent: boolean) {
    return this.http.get(environment.baseUrl + 'upload' + '/' + id, { responseType: 'blob' as 'json' });
}
My component function, called when an item is selected, performs the file download:
FileDownload(event: any) {
    // const blob = await this.callService.getEvidenceFile(event.target.value, true);
    // const url = window.URL.createObjectURL(blob);
    this.callService.getEvidenceFile(event.target.value, true).subscribe(data => {
        const binaryData = [];
        binaryData.push(data);
        const downloadLink = document.createElement('a');
        downloadLink.href = window.URL.createObjectURL(new Blob(binaryData));
        document.body.appendChild(downloadLink);
        downloadLink.click();
    });
}

Embed every video in a directory on webhost

This may sound silly... but is there any way to embed all videos in a directory to a webpage? I'm hosting some videos on my website but right now you have to manually browse the directory and just click a link to a video.
I know I can just embed those videos to a html page but is there any way to make it adapt automatically when I add new videos?
How you do this will depend on how you are building your server and web page code, but the example below, which is Node and Angular based, does exactly what you are asking:
// Required modules (these requires are implied by the original snippet)
var express = require('express');
var fs = require('fs');
var path = require('path');
var router = express.Router();

// GET: route to return the list of uploaded videos
router.get('/video_list', function(req, res) {
    // Log the request details
    console.log(req.body);
    // Get the path of the uploaded_videos directory
    var _p = path.resolve(__dirname, 'public', 'uploaded_videos');
    // Find all the files in the directory and add them to a JSON list to return
    var resp = [];
    fs.readdir(_p, function(err, list) {
        // Check if the list is undefined or empty first, and if so just return
        if (typeof list == 'undefined' || !list) {
            return;
        }
        for (var i = list.length - 1; i >= 0; i--) {
            // For each file in the directory, add an id and filename to the response
            resp.push({ "index": i, "file_name": list[i] });
        }
        // Send the response
        res.json(resp);
    });
});
This code is old in web years (i.e. about 3 years old), so the way Node handles routes etc. is likely different now, but the concepts remain the same regardless of language:
go to the video directory
get the list of video files in it
build them into a JSON response and send them to the browser
browser extracts and displays the list
The browser code corresponding to the above server code in this case is:
$scope.videoList = [];

// Get the video list from the Colab Server
GetUploadedVideosFactory.getVideoList().then(function(data) {
    // Note: should really do some type checking etc. here on the returned value
    console.dir(data.data);
    $scope.videoList = data.data;
});
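To actually embed the videos rather than just list them, the returned file names can be turned into <video> elements on the page. Here is a framework-agnostic sketch, assuming the uploaded_videos directory is served statically and the page has a <div id="videos"> container:
// Fetch the JSON list from the route above and append a <video> per file
fetch('/video_list')
    .then(function(res) { return res.json(); })
    .then(function(list) {
        var container = document.getElementById('videos');
        list.forEach(function(item) {
            var video = document.createElement('video');
            video.controls = true;
            video.width = 320;
            video.src = '/uploaded_videos/' + item.file_name;
            container.appendChild(video);
        });
    });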
You may find some way to automatically generate a web page index from a directory, but the type of approach above will likely give you more control: you can exclude certain file names, types, etc. quite easily, for example.
The full source is available here: https://github.com/mickod/ColabServer

Slow Loading of Large MongoDB Database in Node JS

I have created a webpage with Node JS, Express JS, Mongoose and D3 JS.
In the webpage, it contains 3 pull down menus: Department, Employee, Week.
The usage of the webpage is as follows:
When 'Department' is selected, 'Employee' menu will be filtered to show only those from the selected 'Department'. The same goes to 'Week' after 'Employee' is selected.
After the 3 menus are selected and 'PLOT' button is clicked, a line chart (using d3.js) will be plotted to show the employee working hours for the month.
MongoDB JSON (hrs is a list of day/hour pairs):
{
    dep: '1',
    emp: 'Mr A',
    week: 1,
    hrs: [
        [1, 8],
        [2, 10],
        ...
    ]
}
Here are the snippets of my codes:
routes.js
// Connect the required database and collection
var dataAll = require('./models/dataModel');

module.exports = function(app) {
    app.get('/api/data', function(req, res) {
        dataAll.find({}, {}, function(err, dataRes) {
            res.json(dataRes);
        });
    });
    app.get('*', function(req, res) {
        res.sendfile('./index.html');
    });
};
index.html
... // More codes
<div id="menuSelect1"></div>
<div id="menuSelect2"></div>
<div id="menuSelect3"></div>
...
<script src="./display.js" type='text/javascript'></script>
... // More codes
display.js
// Menu (Department, Employee, Week) information is gathered here
queue()
    .defer(d3.json, "/api/data")
    .await(createPlot);

function createPlot(error, plotData) {
    var myData = plotData;
    var depData = d3.nest()
        .key(function(d) { return d.dep; })
        .rollup(function(v) { return v.length; })
        .entries(myData);
    selectField1 = d3.select('#menuSelect1')
        .append("select")
        .on("change", menu1Change)
        .selectAll("option")
        .data(depData)
        .enter()
        .append("option")
        .attr("value", function(d) { return d.key; })
        .text(function(d) { return d.key; });

    function menu1Change() {
        // Filter the next menu with the option chosen in this menu
        ... // More codes
        var selectedVal = this.options[this.selectedIndex].value;
        var empData = dataSet.filter(function(d) { return d.emp === selectedVal; });
        ... // More codes
    }
    ... // More codes
}
Problem:
Functionally, it is working as expected. The problem is that as the database grows larger and larger, the page becomes very slow to load (minutes). I believe it is due to the route where all the data is retrieved (.find({}, {})), but I thought I needed that because I use the full data set in display.js to filter my menu options.
Is there a better way to do this to resolve the performance issue?
It is rarely necessary to send all the data to the client. In fact, I haven't seen an API with a single endpoint that returns the entire database to everyone.
It's hard to give you a specific solution without knowing what your data looks like, how large it is, how fast it grows, etc. The performance issues may be related to querying the database, to the large data transfer, or to the large JSON the browser has to parse.
In any case, you shouldn't send your entire database to the client with no limits. Usually this is implemented with a number of records to skip and a maximum number of records to return.
Some frameworks, like LoopBack, do it for you; see:
https://docs.strongloop.com/display/public/LB/Skip+filter
https://docs.strongloop.com/display/public/LB/Limit+filter
If you're using Express, then you'll have to implement the limits yourself, as sketched below.
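A minimal sketch with Express and the Mongoose model from the question (the query parameter names and the 200-record cap are arbitrary choices):
app.get('/api/data', function(req, res) {
    // Read paging parameters from the query string, with safe defaults
    var skip = parseInt(req.query.skip, 10) || 0;
    var limit = Math.min(parseInt(req.query.limit, 10) || 50, 200);
    dataAll.find({})
        .skip(skip)
        .limit(limit)
        .exec(function(err, dataRes) {
            if (err) return res.status(500).json({ error: err.message });
            res.json(dataRes);
        });
});
The client then pages through the data with requests like /api/data?skip=100&limit=50 instead of pulling everything at once.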
To test the bottleneck, you can open the Mongo shell and run the .find({}, {}) query there to see how long it takes, and you can see the transfer size and time in the browser's developer tools. This may help you narrow down the place that needs the most attention, but returning the entire database, no matter how large it is, is already a good place to start fixing.
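For example, in the mongo shell (the collection name here is a placeholder):
// executionTimeMillis and totalDocsExamined show how expensive the full scan is
db.yourCollection.find({}).explain("executionStats")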

Google Picker not selecting by file extension

Given files in Drive with an (arbitrary) extension *.abc, this code...
gapi.load("picker", { "callback": function () {
if (!picker) {
var view = new google.picker.DocsView(google.picker.ViewId.DOCS);
view.setMimeTypes("application/vnd.google.drive.ext-type.abc");
view.setMode(google.picker.DocsViewMode.LIST);
picker = new google.picker.PickerBuilder();
picker.setTitle(TEXT.PICKER_PROMPT);
picker.setAppId(CONST.APP_ID);
picker.addView(view);
picker.setOAuthToken(session.OAuthToken.access_token);
picker.setCallback(pickerCallback);
picker.setInitialView(view);
};
picker.build().setVisible(true);
));
...doesn't find any of the existing 'abc' files in Drive.
These files are of MIME type text/xml, and the following line DOES find them:
view.setMimeTypes("text/xml");
Why doesn't the search by extension work?
For those finding this from Google: the question wasn't as daft as it sounded. There is a (pseudo) MIME type for each extension in the Drive world, but it's not usable in that way, at least not in the Picker.
A workable (i.e. user-friendly) solution is to use a query on the view:
view.setQuery("*.abc");
For completeness:
gapi.load("picker", { "callback": function () {
if (!picker) {
var view = new google.picker.DocsView(google.picker.ViewId.DOCS);
view.setMimeTypes("text/xml");
view.setMode(google.picker.DocsViewMode.LIST);
view.setQuery("*.abc");
picker = new google.picker.PickerBuilder();
picker.setTitle(TEXT.PICKER_PROMPT);
picker.setAppId(CONST.APP_ID);
picker.addView(view);
picker.setOAuthToken(session.OAuthToken.access_token);
picker.setCallback(pickerCallback);
picker.setInitialView(view);
};
picker.build().setVisible(true);
));
Adding on to HeyHeyJC's answer: you can use a double pipe (||) to separate file extensions if you want to match more than one.
For example, view.setQuery("*.abc || *.def || *.ghi");

Export d3-generated HTML table as CSV (must work on IE too)

From JavaScript, I make an ajax/xhr d3.csv() call which triggers a lengthy MySQL query (which can sometimes take more than 30 seconds to run). An HTML table is then generated (via d3.js) from the data.
I want the user to be able to download the data as a CSV file via a button click, but
I don't want to create a tmp file on the server for this
Running the query again on the server is not an option -- I don't want to make the user wait another 30 seconds (nor tie up the database again)
I want to specify the filename, e.g., descriptiveName-some_datetime_here.csv
It needs to work in IE (corporate America thing) and Safari (corporate Executive thing)
Converting the JSON data that d3 created into CSV is not an issue (I know how to do that part).
There are many similar SO questions, and the general consensus seems to be: use a data URI and specify the filename in a download attribute (Q1, Q2, etc.).
But that attribute is sadly not supported on IE or Safari.
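(For browsers that do support it, the approach looks something like the sketch below, where csvString stands for the already-converted CSV text and the filename is illustrative.)
// Works in Chrome/Firefox, but not in IE or Safari
var a = document.createElement('a');
a.href = 'data:text/csv;charset=utf-8,' + encodeURIComponent(csvString);
a.download = 'descriptiveName-20150101-120000.csv';
a.click();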
Maybe there is a better way, but here's one way to do it: submit a form with the desired filename and the data as two hidden form elements. Have the server simply return the data with the appropriate headers set for a file download. No need for tmp files; works on all browsers.
HTML:
<form id="download-form" method="post" action="/download">
    <input type="button" value="download CSV">
</form>
<!-- the button is right above the HTML table -->
<table>... </table>
JavaScript/D3:
var jsonData;
var filenameDateFormat = d3.time.format("%Y%m%d-%H%M%S");

// ... after loading the data, set jsonData to the data returned from d3.csv()
jsonData = data;

// display the form/button, which is initially hidden
d3.select("#download-form").style("display", "block");

d3.select("#download-form input[type=button]").on('click', function() {
    var downloadForm = d3.select("#download-form");
    // remove any existing hidden fields, because the data may have changed
    downloadForm.selectAll("input[type=hidden]").remove();
    downloadForm.each(function() {
        d3.select(this).append("input")
            .attr({ type: "hidden",
                    name: "filename",
                    value: CHART_NAME + "-" + filenameDateFormat(new Date()) + ".csv" });
        d3.select(this).append("input")
            .attr({ type: "hidden",
                    name: "data",
                    value: convertToCsv(jsonData) });
    });
    document.getElementById("download-form").submit();
});

function convertToCsv(data) {
    var csvArray = ['field_name1_here,field_name2_here,...'];
    data.forEach(function(d) {
        csvArray.push(d.field_name1_here + ',' + d.field_name2_here + ...);
    });
    return csvArray.join("\n");
}
Server (Python, using Bottle):
@app.route('/download', method='POST')
def download():
    if 'Chrome' in request.environ.get('HTTP_USER_AGENT', ''):
        # don't add the Content-Type, as this causes Chrome to output the following
        # to the console:
        #   Resource interpreted as Document but transferred with MIME type text/csv
        pass
    else:
        response.set_header('Content-Type', 'text/csv')
    response.set_header('Content-Disposition',
                        'attachment; filename="' + request.forms.filename + '"')
    return request.forms.data
Not pretty, but it works.