Dynamically changing a Report's Shared Data Source at Runtime

I'm looking to use SSRS for multi-tenant reporting and I'd like the ability to choose a report's Shared Data Source at runtime. What do I mean by this? I'm flexible, but the two most likely approaches I see are these (though I'm open to others):
1) The Shared Data Source is dictated by the client's authentication. In my case, the "client" is a .NET application rather than an end user, so if this is a viable path I'd like the MainDB (that's what I'm calling it) Shared Data Source to be selected based on the Service Account the client logs in as.
2) The name of the Shared Data Source is passed as a report parameter, and that dictates which one to use. Given that all of my clients are "trusted players", I am comfortable with this approach. Each client will have its own Service Account, but that's just for good measure and shouldn't matter here. So instead of a single MainDB data source, we could have Client1DB, Client2DB, etc. It's okay if a new data source means a new deployment, but this needs to scale reasonably to ~50 data sources over time.
Why? Because we run multiple, duplicate copies of our production application for different customers, but we don't want to duplicate everything - just the web apps and databases. We're fine with sharing some common "back-end" pieces. For SSRS in particular, because of how expensive the licenses are (and how rarely our users run reports), we really want a single back-end for all of our customers. (I actually keep a second one on standby for manual disaster recovery; we don't need to be fancy here, as reports are the least important DR concern we have.)
I have seen this question, which points to this post, but I was really hoping there was a better way. Because of all of the additional steps, effort, and limitations involved, I'd rather just use PowerShell to script duplicate deployments of the reports with tweaked hardcoded data sources than standardize on the steps in that post. That solution feels far too hacky to me and doesn't seem to scale well at all.

I've done this a bunch of terrible ways (usually hardcoded in a dynamic script), and then I discovered it's actually quite simple.
Instead of using a shared data source, use an embedded connection and build the connection string from report parameters (or any other string manipulation/expression).
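For example - a sketch only, where MyReportSqlServer and the report parameter ClientDB are placeholder names - the embedded data source's connection string can be set to an expression:
="Data Source=MyReportSqlServer;Initial Catalog=" & Parameters!ClientDB.Value
Each client application then passes its own database name in that parameter, so a single deployed report can serve every tenant.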

Related

Interconnect multiple databases on the same MySQL server

I am looking for a solution that lets me interconnect several databases.
Let me explain with a concrete example:
I have a main domain (the front page for public clients) and four sub-domains (development, management, client, ...) in the client's web hosting.
Each domain has its own database and runs different software (WordPress, Dolibarr, sysPass, our own software), but all of the databases live on the same MySQL server.
Whenever a CRUD operation happens in one database, I want the other databases to "do" something with that data as well.
Basically, automation.
For example - a user on development.subdomain.xyz sets a project task to "finished".
When the UPDATE is done to the "development" database, I want an INSERT with parts of that data into the "management" database and an UPDATE against the "client" database.
I could write a script that connects to all four databases and performs the necessary operations (a rough sketch of what I mean is just below).
But that feels hard to maintain if multiple users are supposed to have access to this "logic" system?
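Roughly what I mean by such a script - a sketch only, in Node/TypeScript with the mysql2 driver and with made-up table and column names; since all databases sit on the same MySQL server, one connection can reach them all via database-qualified names:
import mysql from 'mysql2/promise';

const pool = mysql.createPool({ host: 'localhost', user: 'automation', password: '...' });

// Called when a task in the "development" database is set to "finished".
async function onTaskFinished(taskId: number) {
  const conn = await pool.getConnection();
  try {
    await conn.beginTransaction();
    // Read the task that was just updated in the development database.
    const [rows] = await conn.query('SELECT id, title, project_id FROM development.tasks WHERE id = ?', [taskId]);
    const task = (rows as any[])[0];
    // Mirror part of that data into the management database ...
    await conn.query('INSERT INTO management.activity_log (task_id, title, note) VALUES (?, ?, ?)', [task.id, task.title, 'finished']);
    // ... and update the client database.
    await conn.query('UPDATE client.projects SET last_update = NOW() WHERE id = ?', [task.project_id]);
    await conn.commit();
  } catch (err) {
    await conn.rollback();
    throw err;
  } finally {
    conn.release();
  }
}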
I could also use the applications' provided APIs and process the data that way (again as a script rather than implementing a whole UI).
But that feels like it adds an unnecessary extra security concern, and again seems hard to maintain?
And if I later want to add functionality - like also sending an email - that would make it even harder for non-coders to work with.
So I found several of these "Low-Code Business Process Management" tools and now I'm at a loss.
Is that what I'm looking for? Can you throw me some tags, keywords or links to guide my search for possible solutions?
I don't even know what to call such a system or how to search for it - which is what's stopping me from progressing.
Thank you for all tips :)

HTML page with an xlsm file as the back end

I was wondering whether it is possible to have an xlsm file as the back end while having HTML as the front end? If so, how can I achieve this?
Thanks in advance.
Since the question suggests a gap in understanding how applications are structured, I will post this as an answer in the hope of clarifying a few things.
First of all, I don't think you understand what the term "back-end" means.
Please read https://en.wikipedia.org/wiki/Front_and_back_ends and http://blog.teamtreehouse.com/i-dont-speak-your-language-frontend-vs-backend
hopefully these will clarify a few things for you.
To explain these concepts briefly:
In an application, front end and back end refer to two parts that communicate with each other and exchange data in some form. This separation exists when the program and the user interface are split apart (for example a server and a client, as in distributed programming). That, however, is only one of many programming patterns today: although rarer now, there are programs that do not separate functionality this way and instead keep everything in a single program statically installed on the client's computer. But where the separation does exist, here is what the terms front end and back end mean:
Why such a separation is necessary:
In today's world many applications (such as web applications and mobile applications) are deployed on shared servers to provide wider and faster access, better support, and a lower cost of access for the client (no disk space required, no download time, etc.). In such cases, since the client doesn't have the program locally, it has to be accessed over network protocols such as TCP (which underlies today's HTTP). The problem is that the front-end files are served every time the application is loaded and cannot keep track of the state of the data - they are stateless (excluding the edge cases of cookies and caches).
Front End:
The sole reason the front end exists is for the user to interact with the application and for it to collect data from the user, such as login information (the user interface).
Back End:
The back end is a little more complicated. There are two major components to a good back-end design:
Logic
Data
The back end is responsible for processing the data coming from the user (the front end) in a correct and meaningful way. For example, in a really simple program that adds two numbers, the front end would be responsible for asking the user for the two numbers, and the back end would carry out the actual addition and send the result back to the front end to be displayed.
If the data has state, the back end also needs to save the last state of the data somewhere on the server. This is where the second component comes in. The most common practice is to have one or more ".db" files representing a database, but there is no obligation to do so: when necessary, your back end could read data from anywhere, from a plain text file to STDIN.
Why do we use databases? The queries. The query languages that come with databases make it much easier to extract and isolate the relevant data.
After processing and modifying the data, the back end sends it back to the front end to be displayed to the user. Common formats for transferring the data are JSON, XML, and S-expressions.
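To make the adding-two-numbers example concrete - a minimal sketch in Node/TypeScript with Express, where the /add route and the port are made up:
import express from 'express';

const app = express();

// Back end: receives the two numbers, performs the logic, returns the result as JSON.
app.get('/add', (req, res) => {
  const a = Number(req.query.a);
  const b = Number(req.query.b);
  res.json({ result: a + b });
});

app.listen(3000);

// Front end (browser side): collects the input and displays the response, e.g.
//   fetch('/add?a=2&b=3').then(r => r.json()).then(d => console.log(d.result)); // 5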
So following this short lecture, back to your question:
Can I have an xlsm file in the backend?
Yes. You can store the data on the back end (server) in any way that you want. The only thing you need to make sure of is that the endpoint the front end talks to reads data from this file and writes back to this file. (CSV files are sometimes used in a way similar to this.)
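A sketch of that idea in Node/TypeScript, using Express and the SheetJS xlsx package (which can read .xlsm workbooks); the file name, sheet layout, and routes are assumptions:
import express from 'express';
import * as XLSX from 'xlsx';

const FILE = 'data.xlsm';   // the workbook acting as the data store
const app = express();
app.use(express.json());

// Read the first worksheet into an array of row objects.
function readRows() {
  const wb = XLSX.readFile(FILE);
  const sheet = wb.Sheets[wb.SheetNames[0]];
  return XLSX.utils.sheet_to_json(sheet);
}

// The front end fetches the data from this endpoint ...
app.get('/items', (_req, res) => res.json(readRows()));

// ... and writes changes back through this one.
// Note: this rewrites the workbook; VBA macros are not carried over in this simple sketch.
app.post('/items', (req, res) => {
  const rows = [...readRows(), req.body];
  const wb = XLSX.utils.book_new();
  XLSX.utils.book_append_sheet(wb, XLSX.utils.json_to_sheet(rows), 'Sheet1');
  XLSX.writeFile(wb, FILE);
  res.status(201).json(req.body);
});

app.listen(3000);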
Is it a good idea?
No. Databases exist for a reason - use them.
Hope this sheds light on a few things. I highly advise that you understand the application stack before writing any code.

Ruby on Rails - Database or Excel

I am currently doing a project in Ruby on Rails and I have been presented with a dilemma.
The dilemma is that the users of my system will be uploading an Excel spreadsheet. The question is: should I read straight from this Excel spreadsheet into my front end, or should I load the spreadsheet into my MySQL database and serve the front end from there?
I have asked numerous people about this and have researched online to no avail.
Any help would be much appreciated.
An Excel file is not a database. If you need to accept it as source input, parse it, copy the data into a real database, and connect to that.
A database is far more flexible and efficient for querying and processing information.
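In Rails that would typically be a spreadsheet-parsing gem plus ActiveRecord; as a language-neutral sketch of the same flow (TypeScript with the SheetJS xlsx package and the mysql2 driver, with made-up table and column names): parse the upload once, insert the rows, and query the database from then on.
import * as XLSX from 'xlsx';
import mysql from 'mysql2/promise';

// Parse the uploaded spreadsheet once, then serve the front end from MySQL.
async function importSpreadsheet(path: string) {
  const sheet = XLSX.readFile(path).Sheets['Sheet1'];
  const rows = XLSX.utils.sheet_to_json<{ name: string; amount: number }>(sheet);

  const db = await mysql.createConnection({ host: 'localhost', user: 'app', database: 'app_db' });
  for (const row of rows) {
    await db.execute('INSERT INTO entries (name, amount) VALUES (?, ?)', [row.name, row.amount]);
  }
  await db.end();
}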
I can think of two benefits - or rather options - of having them upload the Excel spreadsheet for processing by your back end.
1) Uploading it helps with tracking (who sent what, and here is what the back end did with it...). Also consider that other formats or versions could be introduced later; would it be important to keep the originals to identify what went wrong, or to answer "how can we handle this new format"?
2) The front-end way, on the other hand, offloads processing from the back end, but it means the browser app could get fairly complex. Depending on your spreadsheet - that is, if it has many relationships - sending that data up to the server could be complicated. If it is simply a flat spreadsheet, say plain rows without totals or tax calculations, then loading it in the browser and sending those rows up to the server might be an advantage, if offloading processing matters to you.
Point 2 is really diluted by point 1, though, which to me would be of greater importance for future evolution of this service. So I personally would choose uploading it and processing it on the back end.
Update
As you clarified in the comments, you are asking about using Excel on the back end as a database. I would agree with Simone Carletti's answer here, and just add that a real database gives you much more flexibility, more tools, and more performance. With Excel you are loading a file, parsing it into some structure, and then saving it again (even if some framework handles the file for you), whereas a database (MySQL, MongoDB, ...) gives you much more flexibility in structuring and querying, which outweighs the headache of managing DB connections and their speed. You might just want to write a sample of both to evaluate them; the DB solution will probably win you over.

Preemptively getting pages with HTML5 offline manifest or just their data

Background
I have a (glorified) CRUD application that I'd like to enable HTML5 offline support with. The cache-manifest system looks simple yet powerful, but I'm curious about how I can allow users to access data while offline.
For example, suppose I have these pages for the entity "Case" (i.e. this is CRM case-management software):
http://myapplication.com/Case
http://myapplication.com/Case/{id}
http://myapplication.com/Case/Create
The first URI contains a paged listing of all cases, using the querystring parameters pageIndex and pageSize, e.g. /Case?pageIndex=2&pageSize=20.
The second URI is the template for editing individual cases, e.g. /Case/1 or /Case/56.
Finally, /Case/Create is the form used to create cases.
The Problem
I would like all three to be available offline.
/Case
The simple way would be to add /Case to the cache-manifest, however that would break paging (as the links wouldn't work).
I think I could instead add something like /Case/AllData, an XML resource, to the manifest; it would be cached, and when offline a script on /Case would use this XML data to populate the list and provide pagination.
If I go for the latter, how can I have this XML data stored in the in-browser SQL database instead of as a cached resource? I think using the SQL database would be more resilient.
/Case/{id}
This is more complicated. There is the simple solution of manually adding /Case/1, /Case/2, /Case/3, and so on up to /Case/1234, but there can be hundreds or even thousands of cases, so this isn't very practical.
I think the system should provide access to the 30 most recent cases, for example. As above, how can I store this data in the database?
Also, how would this work? If I don't explicitly add /Case/34 to the manifest and the user navigates to /Case/34, how can I get the browser to load a page that my JavaScript will populate from the browser's SQL database data, rather than showing the offline message?
/Case/Create
This one is simpler - it's just an empty page, and in the <form>'s submit handler my script would detect whether the browser is offline and, if so, add the new case to the browser's SQL database. Does this sound okay?
Thanks!
I think you need to be looking at a localStorage-based store (though it does have some downsides), but there are alternatives such as WebSQL and IndexedDB.
Also, I don't think you should be using numeric IDs if you are allowing people to create cases offline, as you will get primary key conflicts; it is probably best to use something like a GUID.
Another thing you need is the ability to push those new cases up to the server - there could be multiple...
Can they be edited? If they can, I think you really need to think hard - very hard - about synchronization and conflict resolution.
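A sketch of those last two points (browser TypeScript; the storage key and the /api/cases endpoint are made up, crypto.randomUUID needs a modern browser, and localStorage stands in where IndexedDB would be the more robust choice):
// Give each locally created case a collision-free id instead of a numeric one.
function createCaseOffline(data: { title: string }) {
  const newCase = { id: crypto.randomUUID(), ...data, pendingSync: true };
  const queue = JSON.parse(localStorage.getItem('pendingCases') ?? '[]');
  queue.push(newCase);
  localStorage.setItem('pendingCases', JSON.stringify(queue));
  return newCase;
}

// When connectivity returns, push the queued cases up to the server.
async function pushPendingCases() {
  if (!navigator.onLine) return;
  const queue: any[] = JSON.parse(localStorage.getItem('pendingCases') ?? '[]');
  for (const c of queue) {
    await fetch('/api/cases', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(c),
    });
  }
  localStorage.setItem('pendingCases', '[]');
}

window.addEventListener('online', pushPendingCases);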
Shameless self-promotion: I have a project that is designed to handle these very issues. It's not done, but it's close. You can see it (with an ugly but very functional demo) at https://github.com/forbesmyester/SyncIt

Configuration Information in DB?

I have lots of stuff in an app.config, and when changes are necessary, an app restart is required. Bad for my 24x7 web server system (it really is 24x7, not even 23x7). I would like to use a good strategy for keeping the config information in a DB table and query/use it as needed. I googled around a bit and am coming up dry. Does anyone have any suggestions before I re-invent the wheel?
Thanks.
I needed exactly this for a recent application, and I couldn't use any application-server-specific techniques because some console apps run from cron jobs needed to access the settings too.
I basically made a couple of small tables to create a registry-style configuration database. I have a table of keys (which all have parent-keys so they can be arranged in a tree structure) and a table of values which are attached to keys. All keys and values are named, so my access functions look like this:
openKey("/my_app");
createKey("basic_settings");
openKey("basic_settings");
createValue("log_directory","c:\logs");
getValue("/my_app/basic_settings","log_directory");
The tree structure allows you to logically separate similar data (e.g. you can have a "log_directory" value under several different keys) and avoids having the overly verbose names you find in properties files.
All the values are just strings (varchar2 in the db), so there's some overhead in converting booleans and numbers: but it's only config data, so who cares?
I also create a "settings_changed" value that holds a datetime string, so any app can quickly tell whether it needs to refresh its configuration (you obviously need to remember to set it whenever you change anything, though).
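A sketch of that refresh check, reusing the access functions above (TypeScript-flavoured; getValue is the function from the example, loadAllValues is a hypothetical helper that re-reads the whole tree, and the key path is assumed):
declare function getValue(keyPath: string, name: string): string;        // from the access functions above
declare function loadAllValues(keyPath: string): Record<string, string>; // hypothetical helper

let lastSeenChange = '';                     // remembered from the previous check
let config: Record<string, string> = {};

// Call this periodically (e.g. on a timer, or at the top of each cron run).
function refreshConfigIfChanged() {
  const changedAt = getValue('/my_app', 'settings_changed');  // datetime string set on every edit
  if (changedAt !== lastSeenChange) {
    lastSeenChange = changedAt;
    config = loadAllValues('/my_app');
  }
}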
There may be tools out there that do this kind of thing already, but this was only a day's worth of coding and it works a treat. I added command-line tools to edit and upload/download parts or all of the tree, then made a quick graphical editor in Java Swing.