How can we find the backing dataset for a table that is being queried through the Foundry Postgres SQL service in Slate?
Edit: We do have a way to find sync information from the Details tab of a dataset. I want to reverse engineer this, i.e. find a way to get the dataset details from the Foundry sync table name, since I don't know the source dataset.
The metadata mapping a dataset RID to a Postgres table is stored in the postgate.dataset table; however, only the administrator account has permission to query the postgate schema. If you've "lost" the source dataset and can't find it by looking through the possible input datasets in the Dataset preview, you can reach out to your Foundry Support contact for further help.
The general best practice is to manually add the relevant datasets into the Dataset tab in the Slate app - I normally do this even if I'm primarily using the Platform tab to read from the object layer instead of from Postgres.
The reason this isn't automatically inferred is that Slate doesn't actually introspect your query, and you can write queries that dynamically choose which table to query at runtime.
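For example, a Slate query can take its target table from a widget value, so the table isn't known until runtime (the widget name below is just illustrative):

    select * from "{{w_tableSelector.selectedValue}}" limit 100

In a case like that there is no single backing dataset Slate could infer, which is another reason to list the datasets you depend on manually.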
I've created a Scenario in Foundry Workshop and would like to save all of the values set in the Scenario to a dataset for further processing or display.
How do I configure my Scenario so that its values are written back to a backing dataset like this?
There are two ways to accomplish this task:
You can save the Scenario as an object. See https://www.palantir.com/docs/foundry/workshop/scenarios-save/#saving-scenarios-as-a-user for the relevant documentation.
You can also apply the actions in a Scenario to the Ontology. See https://www.palantir.com/docs/foundry/workshop/scenarios-apply/ for the relevant documentation.
It is not currently possible to write the values from a saved Scenario to a dataset, but if the Scenario is applied the changes will appear in the writeback dataset as normal.
I plan to move data periodically from a number of databases using Azure Data Factory (ADF), and I want to land the data in Azure Parallel Data Warehouse (APDW). However, the 'destination' step in the ADF wizard offers me two options: 1) when the data is retrieved from a view, you are expected to map the columns to an existing table, and 2) when the data comes from a table, you are expected to generate a table object in the APDW.
Realistically, this is too expensive to maintain, and it is possible to map source data to the wrong landing zone by mistake.
What I would like to achieve is an algorithmic approach that uses variables to name schemas, customer codes and tables.
After the source data has landed, I will be transforming it using our SSIS integration runtime. I am also wondering whether an SSIS package could request the source data instead of an ADF pipeline.
Are there any resources about connecting to on-premises IRs through SSIS objects?
Can the JSON of an ADF be modified to dynamically generate a schema for each data source?
For your question #2 (Can the JSON of an ADF be modified to dynamically generate a schema for each data source?): you could put your table-generation script in the copy activity's pre-copy script.
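As a rough sketch (the pipeline parameter names and the column list below are just placeholders), the sink's pre-copy script can create the target table from pipeline parameters before each copy runs, so one parameterised pipeline can serve many sources:

    -- Pre-copy script on the copy activity sink; the @{...} parts are ADF
    -- expressions resolved before the script is sent to the warehouse.
    IF OBJECT_ID('@{pipeline().parameters.SchemaName}.@{pipeline().parameters.TableName}') IS NULL
        CREATE TABLE [@{pipeline().parameters.SchemaName}].[@{pipeline().parameters.TableName}]
        (
            -- placeholder columns; in practice this list would be generated
            -- from the source metadata
            Id INT,
            CustomerCode NVARCHAR(50),
            LoadDate DATETIME2
        );

The same parameters can also drive the sink dataset's schema and table names, so the per-source mapping doesn't have to be maintained by hand.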
I am trying to change the server for several dozen reports, all developed by different people, that use a mix of:
Shared data sources (I have a query for this)
Custom data sources in RDLs (not perfect, but I have a PowerShell script that downloads all the reports, and then I can search them with Notepad++)
Custom Data Sources that are overridden on the SSRS Server.
Does anybody have a way to find the details of the custom data sources? Shared is easy; I need the details of the custom ones. I tried going through the data sources table, but it's not standard varbinary.
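For reference, by "going through the data sources table" I mean something along the lines of the query below against the ReportServer database; the ConnectionString column comes back as encrypted binary rather than anything readable:

    -- dbo.DataSource holds the data sources defined or overridden on the
    -- server, but ConnectionString is stored encrypted.
    SELECT ds.Name, ds.ConnectionString
    FROM   dbo.DataSource AS ds;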
And hey, bonus fake internet points if you have a way to update the custom DS as well; I already have code to change the shared ones.
What’s the best practice for integrating SQL Server with Active Directory (AD)?
NB. I’m using SQL Server 2016
Crux of the issue: I'm using SSRS 2016 and have several reports that need to be filtered based on the user accessing the reports. Originally I created a table of users that would need to access the reports. Then in the report builder I passed the UserID as a parameter within the query so that the resulting dataset would be limited to the data the user needed to see.
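The dataset query looked roughly like this (table and column names are illustrative), with @UserID mapped to the built-in User!UserID value:

    SELECT d.OrderID, d.Region, d.Amount
    FROM   dbo.ReportData AS d
           INNER JOIN dbo.ReportUsers AS u ON u.Region = d.Region
    WHERE  u.LoginName = @UserID;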
The problem this created is that the User table would have to be maintained, and Active Directories are dynamic. Now that I have some time to develop a better option, I’d like to link the LDAP data with SQL Server.
I’m wondering what the best practice for doing this is.
One way I pursued this was through an SSIS package with an ADO.NET connection: pull the AD data, convert it, load it into a table, and then schedule a job to run the package however often I needed. This was problematic because, for whatever reason, I couldn't get the data conversion step to work.
The second way I've been approaching this is to create a linked server instance for the AD. My research has indicated that I'll need to create a function that overcomes the string limitation of the xp_sprintf function, then leverage temp tables and loop through the LDAP data to get around the 1,000-record limit in AD. I've been able to accomplish all of this.
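The core of that approach looks roughly like this (the linked server name, domain components and attributes are placeholders for my environment):

    -- Query AD through an ADSI linked server; the paging/looping happens
    -- elsewhere to stay under the 1,000-row limit per query.
    SELECT sAMAccountName, displayName, mail
    FROM   OPENQUERY(ADSI,
           'SELECT sAMAccountName, displayName, mail
            FROM   ''LDAP://DC=contoso,DC=com''
            WHERE  objectClass = ''user''');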
At this point though, there appears to be some other issues.
This ultimately increases the amount of code in the views for my reports, which may make them harder for other database users to update if and when the time comes, to the point that I'd need to abandon the views and create stored procedures for the reports to pull from.
This also means every report access generates traffic against LDAP in addition to SQL Server, increasing the transaction count.
To resolve that, I could wrap the original LDAP query in a stored procedure that populates a table, and then create a job to run that procedure every so often.
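In other words, something along these lines, scheduled through SQL Server Agent (all names are illustrative):

    CREATE PROCEDURE dbo.RefreshReportUsers
    AS
    BEGIN
        -- Reload the cached users table from AD on each scheduled run.
        TRUNCATE TABLE dbo.ReportUsers;

        INSERT INTO dbo.ReportUsers (LoginName, DisplayName)
        SELECT sAMAccountName, displayName
        FROM   OPENQUERY(ADSI,
               'SELECT sAMAccountName, displayName
                FROM   ''LDAP://DC=contoso,DC=com''
                WHERE  objectClass = ''user''');
    END;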
Either option solves the problem of maintaining the users table, which is good, but neither is perfect because AD changes can take place at any time.
Which option is better here?
If the SSIS package is the better route, I’m curious as to why that is the better route. I’m not opposed to going back and figuring out what it is I’m missing on the SSIS package to make it work.
Are there additional options I should consider if I want to get the most up-to-date Active Directory listing?
Thanks.
We have an application in which multi-tenancy is implemented by having a separate MySQL database for each tenant. The table structures are the same. I have a requirement to list all expiring products for each tenant, and I was wondering how I can incorporate all of them into one data web service in WSO2. I know that I can create a query with the database name prefixing the table:
e.g. select DB1.products.id, DB1.products.name from DB1.products
Do I need to define a data source for each database (100+ tenants), and can I specify the database name as an input variable in the data service operation? i.e. select ?.products.id, ?.products.name from ?.products
Thank you for your help.
Cheers,
Erwin
If your intention is to have a generic data service that retrieves tenant-specific information from the database dedicated to each tenant, the cleanest way to achieve this, as I see it, is to make your SQL queries generic and make the datasource they use dynamically discoverable.
Since you have 100+ tenants (wow, that's a huge number :)), you presumably have 100+ databases created for those tenants too. You would need to create a Carbon datasource with the same name in each tenant (say, "testDS") wrapping the tenant-specific database configuration such as the JDBC URL, credentials, etc. Then, if you configure your data service to use that datasource, at runtime it will pick up the appropriate tenant-specific datasource, because the datasource feature fully supports multi-tenancy. That keeps you from having to pass the database name, etc., into the SQL query, and makes your data service configuration cleaner, more generic and therefore more maintainable.
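With that in place, the query inside the data service stays completely generic; a minimal sketch, assuming a products table with an expiry date column:

    -- Runs against whichever "testDS" datasource the current tenant resolves to.
    SELECT id, name
    FROM   products
    WHERE  expiry_date <= ?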