SSRS export format customizations: what are the limitations? - reporting-services

I have a 3rd-party system that requires the user to manually import new data whenever they choose. I have a view in MS SQL Server with the fields in the exact order that is wanted.
This 3rd-party system needs the export file in a comma-quote format: every single field surrounded by quotes, not just the ones that contain the field delimiter (a comma).
I have worked with the configuration files to try to customize how CSV is exported, but it seems the available options for the CSV renderer do not allow me to get to this format. Am I making this more difficult than it needs to be? What do I need to do to get a format like this?
Seeing as this report could be run without any parameters every time, I am contemplating setting something up with Python, as I could accomplish exactly what I want in a very small number of lines of code. However, it would be nice to use SSRS, since it takes away my need to figure out delivery of the export file and provides a simple enough interface that any user should be able to figure out.
Thanks.

MSSQL is a data source to get data out of. Since you are simply looking for a way to extract data from the database, a Python script that creates the file exactly as you wish would be the simplest solution. K.I.S.S. :)
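For reference, a minimal Python sketch of that export, assuming pyodbc and ODBC Driver 17 are available; the server, database, and view names are hypothetical:

    import csv
    import pyodbc

    # Hypothetical connection details; adjust for your environment.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM dbo.MyExportView")

    with open("export.csv", "w", newline="", encoding="utf-8") as f:
        # QUOTE_ALL surrounds every field with quotes, not just the ones
        # containing the delimiter -- the comma-quote format described above.
        writer = csv.writer(f, quoting=csv.QUOTE_ALL)
        writer.writerow([column[0] for column in cursor.description])  # header
        writer.writerows(cursor.fetchall())

The csv module's QUOTE_ALL option is exactly the behavior the asker couldn't get from the CSV renderer's configuration options, which is why the script route stays so short.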

Related

How to document report visualizations in Power BI?

I've been using DAX to help me document my Power BI file. Using DAX queries I've been able to record all the fields that exist in the file, including calculated fields and measures. In my documentation process I am also looking for a way to record the visualizations on the report, namely the charts and graphs. Unfortunately, no DAX query I've read about provides data such as the visualization title, which fields it uses, or what kind of graph it is. Is there any DAX query that provides this information, in whole or in part?
In addition to attempting to document with DAX, I have also looked at the raw data in the Power BI file (for those who may not know, you can rename your Power BI file from .pbix to .zip and view the raw data). The relevant files within the PBIX are either XML or JSON. Looking at ../Report/Layout.JSON specifically, I have seen JSON-formatted text that includes visualization data. Is there any easy way to extract this data and format it in a more readable fashion?
For clarity, I do not need the contents of the tables, but I would like a way to record which fields are being used in the visualizations, rather than which fields merely exist.
EDIT: I've found a workaround. It isn't efficient, and I would still appreciate any knowledge on this subject.
I mentioned going through the Layout file, renaming it to .JSON and poking around in it in Notepad++. I've found that you can ctrl+F for "displayName", "queryRef" and ""title\":show\":true,\"text\":\"". Break these all onto new lines and indent them with a tab (use ctrl+H and replace with \n\t in Notepad++). These indent the JSON-formatted lines for Power BI pages, the fields called by visualizations, and the visualization titles (if they have any), respectively.
Save this document as .csv and load it into Excel by delimiting on tabs. Use your preferred process (I prefer Query Editor) to remove the other, non-indented rows. There may still be a lot of excess characters on the indented lines, which need to be removed manually. At the end of this process, though, I ended up with 3 columns in Excel listing the aforementioned fields I've been looking for.
On a PBIX file with more than a dozen pages and several hundred dependent fields this process took about three hours. If there are any faster ways to do this, I would love to hear about them.
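If you would rather skip the Notepad++ surgery, the same information can be pulled out of the file programmatically. Below is a minimal Python sketch; it assumes the usual PBIX internals (a Report/Layout entry encoded as UTF-16-LE, and key names like sections, visualContainers, config, singleVisual, projections, and queryRef), which are undocumented and can vary between Power BI versions:

    import json
    import zipfile

    # "MyReport.pbix" is a hypothetical file name; adjust to your report.
    with zipfile.ZipFile("MyReport.pbix") as z:
        # The Layout entry is typically UTF-16-LE encoded JSON.
        layout = json.loads(z.read("Report/Layout").decode("utf-16-le"))

    for section in layout.get("sections", []):       # sections = report pages
        page_name = section.get("displayName")
        for container in section.get("visualContainers", []):
            # Each container's "config" value is itself a JSON string.
            config = json.loads(container.get("config", "{}"))
            visual = config.get("singleVisual", {})
            visual_type = visual.get("visualType")
            # "projections" maps roles (Category, Values, ...) to field refs.
            fields = [
                item.get("queryRef")
                for role in visual.get("projections", {}).values()
                for item in role
            ]
            print(page_name, visual_type, fields)

Titles are buried deeper in the same config object, so extracting them takes a little more digging, but a loop like this already yields the page, visual type, and fields per visual in seconds rather than hours.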
As you have noted, DAX doesn't help you in this case because it will tell you about the model rather than the visuals on the report pages. The Layout file works, but you have to parse it for the information you need. You could probably just pull that JSON file into Power BI and process it there to get the info you want. There are also third party tools that can help with this. I just looked at https://app.datavizioner.com/ and it lists the ID of the visual, the type of visual, and each field used in the visual. It is currently free and just requires you to upload a PBIT of your report. It doesn't have the title of the visual that we see, so you would have to find a way to map the IDs you see to the human-friendly title of the visuals if you need that.
See http://radacad.com/power-bi-helper. It can tell you which tables and columns are in use, and it can export a list of all tables, columns, formulas, and roles in your model.
If you want stuff on the visualizations and how they are configured, Layout.json is the only way I know. The file does open nicely in Power Query if you were so inclined to try to make something of it.
My new Power BI comparer tool documents the whole Power BI file (pbit). The "CompareVisuals" tab should provide you with all the information necessary.
It is also super fast: just fill in the path to the pbits (you can put the same path into both fields if you don't want to compare anything, but just want to analyze one file).
https://www.thebiccountant.com/2019/09/14/compare-power-bi-files-with-power-bi-comparer-tool/

Modify CSV via Script

I need to modify CSV files via automated scripts, and I need help deciding which direction to look in and which language the script should be written in.
Situation: I have a simple CSV file, but I need an automated script that can edit certain fields and fill in blank ones with whatever I specify. What should my starting point be, and what kind of developer should I look for? Which coding language should he or she be knowledgeable in?
Thank you!!
Maybe you are looking for CSVfix, a tool for manipulating CSV data in the command shell. Take a look here: https://code.google.com/p/csvfix/
With it you can, among other things:
Reorder, remove, split and merge fields
Convert case, trim leading & trailing spaces
Search for specific content using regular expressions
Filter out duplicate data or data on exclusion lists
Enrich with data from other sources
Add sequence numbers and file source information
Split large CSV files into smaller files based on field contents
Perform arithmetic calculations on individual fields
Validate CSV data against a collection of validation rules
Convert between CSV and fixed format, XML, SQL and DSV
I hope this helps you out,
best regards,
Jürgen Jester
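For comparison, an edit like the one described in the question is also only a few lines in a general-purpose language. A minimal Python sketch, with hypothetical file and column names:

    import csv

    # "input.csv", "Region", and "Status" are hypothetical; adjust to your data.
    with open("input.csv", newline="") as src, \
         open("output.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if not row["Region"]:           # fill blank fields with a default
                row["Region"] = "Unknown"
            row["Status"] = row["Status"].strip().upper()  # edit a field
            writer.writerow(row)

Any developer comfortable with a scripting language (Python, PowerShell, and the like) should be able to handle this kind of task.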

How to make SSIS choose data source depending on parameter?

I have an SSIS data flow task that reads a CSV file with certain fields, tweaks it a little and inserts results into a table. The source file name is a package parameter. All is good and fine there.
Now, I need to process a slightly different kind of CSV file with an extra field. This extra field can be safely ignored, so the processing is essentially the same. The only difference is in the column mapping of the data source.
I could, of course, create a copy of the whole package and tweak the data source to match the second file format. However, this "solution" seems like terrible duplication: if there are any changes in the course of processing, I will have to do them twice. I'd rather pass another parameter to the package that would tell it what kind of file to process.
The trouble is, I don't know how to make SSIS read from one data source or another depending on a parameter, hence the question.
I would duplicate the Connection Manager (CSV definition) and Data Flow in the SSIS package and tweak them for the new file format. Then I would use the parameter you described to Enable/Disable either Data Flow.
In essence, SSIS doesn't work with variable metadata. If this is going to be a recurring pattern, I would deal with it upstream from SSIS by building a VB / C# command-line app to shred the files into SQL tables.
You could make the connection manager push all the data into one column, then use a script transformation component to parse the data out to the output depending on the number of fields in the row.
You can split the data on the delimiter into, say, a string array (I googled for help when I needed to do this). From the array's length you can tell which type of file has been connected.
Then your mapping to the destination can remain the same, and there is no need to duplicate any components either.
I had to do something similar myself once: although the files I was using were meant to always be the same format, the format could change depending on the version of the system sending the file. By handling it in a script transformation this way, I was able to absorb the minor variations in the file format. If the files are 99% the same, that is fine; if they were radically different, you would be better off using a separate file connection manager.
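The script component itself would be written in C# or VB inside SSIS, but the splitting logic is the same in any language. A Python sketch of the idea, with hypothetical field names and an optional trailing extra field:

    # Sketch of the "one wide column, split and map" pattern.
    EXPECTED_FIELDS = ["OrderId", "Amount", "OrderDate"]   # hypothetical layout

    def parse_row(raw_line):
        fields = raw_line.rstrip("\n").split(",")
        if len(fields) == len(EXPECTED_FIELDS) + 1:
            fields = fields[:-1]          # second format: drop the extra field
        if len(fields) != len(EXPECTED_FIELDS):
            raise ValueError("Unexpected field count: %d" % len(fields))
        return dict(zip(EXPECTED_FIELDS, fields))

    print(parse_row("42,19.99,2024-01-31"))           # original format
    print(parse_row("42,19.99,2024-01-31,ignored"))   # new format, extra field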

How to export a csv file based on a SQL query programmatically

I am not a SQL expert, so this may be really obvious to those of you who are. Anyway, I have created an application that has to generate a CSV file based on a SQL query. The internet tips I've read on how to export a CSV file from SQL all refer you to Management Studio: right-click on the query results and use the Save As technique. That is not practical for my purposes, as the CSV file has to be generated at the push of a button within my application. I was hoping (and expecting) to be able to do it with a pass-through query or stored procedure, but I'm not seeing that SQL 2008 supports this. Any help would be very much appreciated!
The answer will vary depending on the language your application is written in, but to use C# as an example: a common way is to populate a DataSet from the SQL query and then loop through the DataSet to generate the CSV.
Here is an example of that approach from the interweb.
http://www.diaryofaninja.com/blog/2009/12/16/c-convert-dataset-to-csv
Here is another example using VB
http://www.vbnettutorial.net/?Id=119&Desc=Export-CSV-from-Dataset
The complexity of the data may require that you get fancy (for example, does your data contain double quotes, commas, or binary data?).
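Whatever language you use, that escaping is the only genuinely tricky part. The usual convention (RFC 4180) is to quote a field when it contains a delimiter, a quote, or a newline, and to double any embedded quotes. A small Python illustration of the rule:

    def csv_field(value):
        # Quote only when needed, doubling embedded quotes (RFC 4180).
        if any(ch in value for ch in ',"\n'):
            return '"' + value.replace('"', '""') + '"'
        return value

    row = ["Acme, Inc.", 'He said "hi"', "plain"]
    print(",".join(csv_field(f) for f in row))
    # -> "Acme, Inc.","He said ""hi""",plain

Most languages have a library that does this for you (Python's csv module, for example), which is usually safer than hand-rolling it.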

Use Value in Word calculated with VBA in Access

I have to use a Microsoft Access database to create various bulk letters in Microsoft Office Word. This works just fine in most cases, but it is somehow not possible to use a view (defined in Access) as the data source in Word as long as it contains a column that is calculated by VBA code in the Access database. And I need exactly this calculated value to put into a Microsoft Office Word field.
Unfortunately, there is no way to do this calculation in SQL, so I need a solution for using those views as the source in MS Word.
I have found just one way so far: export the view from Access into an Excel worksheet and use that as the source in Word. As you can imagine, this is quite unworkable :-(.
(We use Microsoft Office 2003)
Cheers,
Gregor
Try using an Access "Make Table" query. The resulting table will have all the values pre-calculated, and Word won't have a problem reading it.
This is a lot like the suggestion to use a text file, but without the extra mess of making the user generate a text file.
The solution is to use some Access word-merge code that outputs the query as a text file, then launches the Word template and points the template at that intermediate text file.
There are many advantages to the above approach. For one, you don't leave Word attached to MS Access, so the whole approach is more stable (if one application freezes up, it will not affect the other as easily). And because the Access code produces that intermediate file, it doesn't matter whether you are using SQL Server or even a workgroup-secured Access database: the same merge system works for JET, MySQL, Oracle, and SQL Server, regardless of the security settings on the database.
Using an intermediate file also means you don't have to resort to bookmark-based examples, which usually require writing new code for every merge (that makes no sense!). And bookmarks are hard to see and insert into the Word document.
Another bonus is that Word users can continue to rely on their training courses and books on how to set up a Word merge document. Merge fields also allow a live preview of the data while editing and composing the Word template document, and the final merged document does not contain any special codes or fields.
I have a working sample here that allows you to word-enable any form with ONE line of code. This super easy word merge system then takes over.
http://www.members.shaw.ca/AlbertKallal/msaccess/msaccess.html
Just scroll down in the above page until you reach the Super Easy Word Merge sample.
The above will also allow the VBA expressions in your query to be used in the Word merge.