I'm currently using the BO BI4 REST API to retrieve metadata from a universe, but all I have been able to get is the list of objects with their ID, path, and type.
I would like to know if it is possible to retrieve, through the API, the SQL that is generated when these objects are used.
Thank you in advance.
It's not as easy as it should be.
The WebI REST API doesn't expose objects' SQL.
You can get summary information about universes, including their objects but not SQL, by using the Semantic Layer REST API.
To get all of universe objects' info, including SQL, you will need to use another SDK. For unx universes, use the Semantic Layer SDK. For unv universes, the only option is the mostly-undocumented Designer COM SDK.
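For what it's worth, here is roughly what the Semantic Layer (Raylight) REST calls look like from C#. Treat it as a hedged sketch: the /logon/long and /raylight/v1/universes paths are from memory of the BI Platform RESTful Web Services SDK documentation and should be verified against your BI4 version; the server, credentials, and universe ID are placeholders, and as noted above the response still won't contain object SQL.

using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class UniverseMetadataSketch
{
    static async Task Main()
    {
        // Placeholder server/port; the RESTful web services usually live under /biprws.
        var baseUrl = "http://biserver:6405/biprws";
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Accept", "application/json");

        // 1. Log on and capture the X-SAP-LogonToken response header.
        var logonBody = new StringContent(
            "{\"userName\":\"Administrator\",\"password\":\"secret\",\"auth\":\"secEnterprise\"}",
            Encoding.UTF8, "application/json");
        var logon = await client.PostAsync($"{baseUrl}/logon/long", logonBody);
        var token = logon.Headers.GetValues("X-SAP-LogonToken").First();
        // Some versions expect the token re-sent wrapped in double quotes.
        client.DefaultRequestHeaders.Add("X-SAP-LogonToken", "\"" + token + "\"");

        // 2. List universes, then request one universe's summary/outline by ID.
        var universes = await client.GetStringAsync($"{baseUrl}/raylight/v1/universes");
        var oneUniverse = await client.GetStringAsync($"{baseUrl}/raylight/v1/universes/123456");
        Console.WriteLine(universes);
        Console.WriteLine(oneUniverse);
    }
}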
I have an optimization algorithm deployed as a live deployment. It takes a set of objects and returns a set of objects of a potentially different size. This works just fine when I'm using the REST API.
The problem is, I want to let the user of the workshop app query the model with a set of objects. The returned objects need to be written back to the ontology.
I looked into an action-backed function, but it seems like I can't query the model from a function?!
I looked into webhooks, but they don't seem to fit the purpose; I would also need to handle the API key and can't write back to the ontology?!
I know how to query the model with scenarios, but it is per sample and that does not fit the purpose, plus I can't write back to the ontology.
My questions:
Is there any way to call the model from a function and write the return back to the ontology?
Is there any way to call a model from workshop with a set of objects and write back to the ontology?
Are modeling objectives just the wrong place for this use case?
Do I need to implement the optimization in Functions itself?
I've answered the questions below and also tried to address some of the earlier points.
Q: "I looked into an action backed function but it seems like I can't query the model from a function?!"
A: That is correct, at this time you can't query a model from a function. However there are javascript based linear optimization libraries which can be used in a function.
Q: "I looked into webhooks but it seems to not fit the purpose and I also would need to handle the API key and can't write back to the ontology?!"
A: Webhooks are for hitting resources on networks where a magritte agent are installed. So if you have like a flask app on your corporate network you could hit that app to conduct the optimization. Then set the webhook as "writeback" on an action and use the webhook outputs as inputs for a ontology edit function.
Q: "I know how to query the model with scenarios but it is per sample and that does not fit the purpose, plus I cant write back to the ontology."
A: When querying a model via workshop you can pass in a single object as well as any objects linked in a 1:1 relationship with that object. This linking is defined in the modeling objective modeling api. You are correct to understand you can't pass in an arbitrary collection of objects. You can write back to the ontology however, you have to set up an action to apply the scenario back to the ontology (https://www.palantir.com/docs/foundry/workshop/scenarios-apply/).
Q: "Is there any way to call the model from a function and write the return back to the ontology?"
A: Not from an ontology edit function.
Q: "Is there any way to call a model from workshop with a set of objects and write back to the ontology?"
A: Only object sets where the objects have 1:1 links within the ontology. You can write back by applying the scenario (https://www.palantir.com/docs/foundry/workshop/scenarios-apply/).
Q: "Is modeling objectives just the wrong place for this usecase? Do I need to implement the optimization in Functions itself?"
A: If you can write the optimization in an ontology edit function it will be quite a bit more straightforward. The main limitation of this is you have to use Typescript which is not as commonly used for this kind of thing as Python. There are some basic linear optimization libraries available for JS/TS.
We are now in the process of evaluating integration solutions and comparing Mule and Boomi.
The use case is to read an Excel file, map the columns to a pre-defined set of JSON attributes, and then use the JSON to insert records into a database. The mapping may vary from one Excel template to another, in that the column names in one Excel file may differ from another's.
How do I inject mapping information (source vs target) from outside integration flow?
Note: In Mule, I'm able to do that using a mapping variable (whose value is JSON) that I inject using the Mule DataWeave language.
Boomi's mapping component is static in terms of structure, but more versatile solutions are certainly possible.
The data processor component opens up Groovy, JavaScript, and XSLT 3.0 as options. These are Turing-complete languages that can be used to bend Boomi to almost any outcome.
You could make the Boomi UI available to those who need to write the maps in JSON. It's a pretty simple interface to learn. Using a route component, there could be one "parent" process that routes to a separate process for each template, each with its own map. Such a solution would be pretty easy to build and run, allowing the template-specific processes to be deployed independently of the "parent".
You could also map to a generic columnar structure and then dynamically alter the target columns by writing a SQL procedure.
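Whichever route you take, the core of "injecting the mapping from outside the flow" is just a lookup-driven rename based on an externally maintained mapping document. A minimal sketch of that idea, in C# purely for illustration (all names are invented; in a Boomi data process step the equivalent would be written in Groovy or JavaScript, with the mapping JSON supplied via, for example, a dynamic process property or a document cache):

using System;
using System.Collections.Generic;
using System.Text.Json;

class MappingSketch
{
    static void Main()
    {
        // Mapping document maintained outside the flow: source column -> target attribute.
        // The column and attribute names here are invented examples.
        var mappingJson = "{\"Cust Name\":\"customerName\",\"Cust No\":\"customerNumber\"}";
        var mapping = JsonSerializer.Deserialize<Dictionary<string, string>>(mappingJson);

        // One Excel row, already parsed into column-name/value pairs.
        var row = new Dictionary<string, string>
        {
            ["Cust Name"] = "Acme Ltd",
            ["Cust No"] = "10042"
        };

        // Rename columns into the pre-defined JSON attributes.
        var target = new Dictionary<string, string>();
        foreach (var cell in row)
        {
            if (mapping.TryGetValue(cell.Key, out var attribute))
                target[attribute] = cell.Value;
        }

        // Prints {"customerName":"Acme Ltd","customerNumber":"10042"}
        Console.WriteLine(JsonSerializer.Serialize(target));
    }
}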
I've come across attempts to do what you're describing (not using either Boomi or MuleSoft) that were tragic failures (https://www.zdnet.com/article/uk-rural-payments-agency-rpa-it-failure-and-gross-incompetence-screws-farmers/). I draw your attention to the NAO's points:
ensure the system specifications retain a realistic level of flexibility
and
bespoke software is costly to develop, needs to be thoroughly tested, and takes more time to implement
The general goal behind a requirement like yours is usually to make transformation/ETL available to "non-programmers", which denies the reality that there are many more skills involved in delivering an outcome than "programming".
I am trying to move my Informatica PowerCenter 10.1 pipelines to Azure Data Factory/Synapse pipelines. Other than rewriting them from scratch, is there a way to migrate them? I am not finding any tools to achieve this either. Has anyone faced this problem? Any leads on how to proceed?
Thanks
There are no out-of-the-box solutions available to complete this migration. Unfortunately, you will have to author them again.
Informatica PowerCenter pipelines are a physical implementation of an Extract Transform Load (ETL) process. Each provider takes a different approach to implementation, and the approaches do not necessarily map well from one to another. Core Azure Data Factory (ADF) is actually more suited to Extract, Load and Transform (ELT), unless of course you use Data Flows.
So what you have to do is:
map out physically what your current pipeline is doing, if you don't have that documentation already. A simple spreadsheet template mapping out the components of the existing pipeline, tracking source and target plus any transformations, will suffice
logically map out what the pipeline is doing; i.e. without using PowerCenter-specific terminology, lay out what the "as is" pipeline is doing. A data flow diagram is a great way to do this
logically map out what the "to be" pipeline should do; i.e. without using any ADF-specific terminology, attempt to refine the "as is" pipeline to its simplest form
using expert knowledge of the ADF components (e.g. Copy, Lookup, Notebook, Stored Proc, to name but a few), map from the logical "to be" to the physical (in the loosest sense of the word, it's all cloud now right :) ): e.g. move data from place to place with the Copy activity, transform data in a SQL database using the Stored Proc activity, implement repeated work with a For Each loop (bear in mind its iterations execute in parallel), do sophisticated transformations or processing using Databricks notebooks if required, and so on. If you require a low-code approach, consider Data Flows.
So you can see it's just a few simple steps. Good luck!
First off, apologies for the long description of my brainspace below. I'm still wrapping my head around lots of these new ideas, so I'm sure I'm describing something incorrectly. Please feel free to correct me where I'm wrong.
We are in the R&D phase of a new ASP.net MVC2 site and want to ensure that we can 1) decouple our data store from our application, 2) allow for our application to be tested via unit tests and 3) allow us to change out our datastore or use something other than Linq2SQL down the line.
This seemingly simple goal has opened up a whole new world to me that includes the Repository pattern, IoC, DI, and all sorts of other things that are making my head swim. Here's what is so far coming into focus, or at least what I believe is a somewhat correct plan to reach our goals:
We will have a number of ISpecificRepository interfaces that define the contract between users of the interface and the underlying data store.
The SpecificRepository implementations will query specific datastores and return POCOs representing our domain objects (or collections of them).
Our Service Layer will perform the application specific business logic using an instance of ISpecificRepository passed to the various service methods and pass these POCO domain objects back to our presentation layer.
As mentioned, we are planning on using Linq2SQL to implement our specific repositories for the application and have decided to decouple our service layer from this implementation by creating POCOs for our domain objects and mapping to and from these objects and the LINQ-generated entities. In the service layer, we can then create business logic to query the repository, add data, and do whatever else we need to do for each use case. This seems fine, but my concern is that since we're using Linq2SQL, our specific Linq repository implementation will now have to house all of the many Get queries that the service layer requires to implement the business logic efficiently.
I'm curious as to whether this somehow breaks the Repository pattern since we're now housing application specific logic not in the service layer but in the repository instead.
The reason I feel that we need to do it this way is so that I can write more efficient Linq queries on my specific Linq repository using various DataLoadOptions, etc. without returning IQueryable from my repository up to my service layer, where it would seem that sort of logic actually belongs. Also, all of the example IRepository interfaces I've seen seem very lightweight and only provide a few methods to GetByID, GetAll, Find, Insert, Delete, and SubmitChanges to the underlying data store. In my case, it sounds like my specific repositories will be doing a great deal more than that.
Thanks for reading this far. Any and all help that can clarify my misconceptions would be greatly appreciated.
-Mustafa
our specific Linq repository implementation will now have to house all of the many Get queries that the service layer requires to implement the business logic efficiently.
I'm curious as to whether this somehow breaks the Repository pattern
Not at all. A Repository is a collection of domain entities. If I have a Repository of Accounts, it is perfectly reasonable to want Accounts.ThatAreOverdue().
I personally prefer fluent naming: Accounts.ThatAreOverdue() feels better than AccountRepository.GetOverdue(), but I suppose that is a point of preference.
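A rough sketch of the shape I mean (the Account fields are invented and the Linq2Sql plumbing behind the IQueryable is omitted):

using System;
using System.Collections.Generic;
using System.Linq;

// Invented example entity; in practice this would be one of your POCO domain objects.
public class Account
{
    public DateTime DueDate { get; set; }
    public bool IsPaid { get; set; }
}

// A fluent repository: query methods read like statements about the domain.
public class Accounts
{
    private readonly IQueryable<Account> _accounts;

    public Accounts(IQueryable<Account> accounts)
    {
        _accounts = accounts;
    }

    // Call site reads as accounts.ThatAreOverdue()
    public IList<Account> ThatAreOverdue()
    {
        return _accounts
            .Where(a => !a.IsPaid && a.DueDate < DateTime.Today)
            .ToList();
    }
}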
Also, all of the example IRepository interfaces I've seen seem very lightweight and only provide a few methods to GetByID, GetAll, Find, Insert, Delete, and SubmitChanges to the underlying data store.
A Repository interface can be thin. Find is meant to be used with the Specification pattern: encapsulate the criteria in another object. The implementation of the criteria can be handed Linq2Sql objects to query against, but it will be more difficult to re-use the criteria classes against in-memory domain objects (versus in the database, where Linq2Sql is involved).
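A minimal sketch of what I mean by encapsulating the criteria (my own names, not from any particular library; it reuses the Account type from the sketch above): the criteria live in one expression, so a Linq2Sql-backed Find can translate them to SQL, while the compiled form can also be checked against in-memory objects.

using System;
using System.Linq;
using System.Linq.Expressions;

// Criteria are encapsulated once; Find stays generic and thin.
public interface ISpecification<T>
{
    Expression<Func<T, bool>> Criteria { get; }
    bool IsSatisfiedBy(T candidate);
}

public class OverdueAccountSpecification : ISpecification<Account>
{
    public Expression<Func<Account, bool>> Criteria
    {
        get { return a => !a.IsPaid && a.DueDate < DateTime.Today; }
    }

    // Compiling per call is fine for a sketch; cache the delegate in real code.
    public bool IsSatisfiedBy(Account candidate)
    {
        return Criteria.Compile()(candidate);
    }
}

public class AccountRepository
{
    private readonly IQueryable<Account> _accounts;

    public AccountRepository(IQueryable<Account> accounts)
    {
        _accounts = accounts;
    }

    // Linq2Sql (or any IQueryable provider) translates the expression to SQL.
    public IQueryable<Account> Find(ISpecification<Account> specification)
    {
        return _accounts.Where(specification.Criteria);
    }
}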
Our Service Layer will perform the application specific business logic using an instance of ISpecificRepository passed to the various service methods and pass these POCO domain objects back to our presentation layer.
Are you saying that your logic will all be in Services and the "domain objects" will be bags of properties and bound to in the view?
I don't think I'd recommend that.
If the same object that is used in the application logic is also used in the view, then you have tightly coupled the two application layers, and experience says that causes problems. It will be very difficult to maintain coherence in the Services and Domain through changes if the View uses the same objects. The View will need pieces of data, and those pieces will inevitably get stuck onto objects where they don't really belong in the domain.
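For example, giving the view its own model and mapping at the boundary keeps display concerns out of the domain. A rough sketch with invented names (a dedicated mapper class as shown here, or a library such as AutoMapper, can do the copying):

using System;

// Domain object used by the service layer (invented example).
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedOn { get; set; }
}

// The shape the view actually binds to; display-only concerns stay out of the domain.
public class CustomerViewModel
{
    public string Name { get; set; }
    public string MemberSince { get; set; }
}

public static class CustomerViewModelMapper
{
    public static CustomerViewModel ToViewModel(Customer customer)
    {
        return new CustomerViewModel
        {
            Name = customer.Name,
            MemberSince = customer.CreatedOn.ToString("MMMM yyyy")
        };
    }
}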
I was studying the Oxite project on Codeplex. It has repository interfaces, and an implementation using LINQ to SQL. The LINQ to SQL results are projected to POCO objects in the repository implementations. It looks something like:
public IQueryable<Post> GetPosts()
{
return projectPosts(excludeNotYetPublished(getPostsQuery(siteID)));
}
This is an interesting pattern, so I wondered if it has a specific name.
Thanks!
Data Mapper. See it mentioned here http://www.martinfowler.com/eaaCatalog/repository.html
"In such systems it can be worthwhile to build another layer of abstraction over the mapping layer where query construction code is concentrated".
Note that there are different views on this. I would say those who subscribe to doing it this way claim the Linq2Sql classes are specific to the data access technology, so I guess they see the projection as an implementation detail of the repository.
Perhaps you mean to ask for a name for the "repository" that returns an IQueryable. I don't think there is a commonly agreed name for that one. Rob Connery used it in his ASP.NET MVC Storefront series: http://blog.wekeroad.com/mvc-storefront. If you look at the old blog posts in that series, you can see that even calling it a repository is controversial.
I think this is more of the Data Transfer Object (DTO) pattern, where results are turned into a DTO for transfer across layers. See Data Transfer Object.
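To make the distinction concrete, here is a hedged guess at what a projection helper like Oxite's projectPosts boils down to (I haven't checked the Oxite source; the names and properties are illustrative). The Linq2Sql-generated entity is mapped field by field into a plain Post object before it leaves the repository, which is why it reads as both Data Mapper and DTO.

using System.Linq;

// Plain object handed up through the service and presentation layers.
public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

// Stub standing in for the Linq2Sql-generated entity class.
public class PostEntity
{
    public int PostID { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

public class PostRepository
{
    // The Select is where the projection happens; Linq2Sql translates it,
    // so the generated entity never escapes the repository.
    private static IQueryable<Post> projectPosts(IQueryable<PostEntity> query)
    {
        return query.Select(p => new Post
        {
            Id = p.PostID,
            Title = p.Title,
            Body = p.Body
        });
    }
}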