I have a project in which I need to fetch data from different tables located in different portlets of a plugin project.
Suppose we have two portlets, A and B, which have tables A1 and B1 respectively.
I want to fetch data from both portlets.
Can anyone help?
I have read about custom SQL queries (http://www.liferaysavvy.com/2013/02/getting-data-from-multiple-tables-in.html),
but still can't find a proper solution.
A good habit is to separate the portlet and the service portlet (model); it depends on how extensive the project is and which build tool you are using (Ant, Maven). I think the advantage is that the implementation of the database operations is visible to any plugin (a common JAR file in the lib directory and the service portlet in webapps) in the portlet project.
More about Service Builder: Service-builder
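As a rough sketch of the cross-portlet call once Service Builder has generated the service layer for portlet B: the entity and method names below are hypothetical, and the imports for the generated classes are omitted because their package depends on what service.xml declares.

```java
import java.util.List;
import com.liferay.portal.kernel.exception.SystemException;

// Hypothetical helper inside portlet A; assumes B's Service Builder
// service JAR is on portlet A's classpath.
public class PortletAHelper {

    public List<B1> fetchRowsFromPortletB() throws SystemException {
        // getB1s(start, end) is the range getter Service Builder
        // generates on B1LocalServiceUtil for an entity named B1.
        return B1LocalServiceUtil.getB1s(0, 100);
    }
}
```

The same pattern works in the other direction, and the custom SQL approach from the linked article can be layered on top of the generated service when a single query must join A1 and B1.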
Is it possible with Yii2 to run multiple websites using a single core codebase and one server?
Suppose I have 5+ websites; each website has its own database, the theme is the same for all, and there is only a single server on the backend, but I'm not sure how I can achieve this goal.
This is possible; in fact, the Advanced Template Project is built like this.
Each website is a single app, so instead of having frontend, backend, and common you can have something like aaa, bbb, ccc, ddd, and eee as the names of the websites. Each app gets its own configuration (see the sketch after the list below).
There are obvious limits to this implementation:
- the vendor folder is the same for every app, so every app gets the same packages in the same versions whether required or not,
- the server must be able to handle more requests at the same time.
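For a rough picture, the layout described above could look like this (the app names are placeholders):

```
project-root/
  common/        shared models and components
  aaa/           first website (its own config, controllers, views)
  bbb/           second website
  ccc/           third website, and so on
  environments/  per-environment configuration
  vendor/        shared by every app (the first limitation above)
```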
As a MySQL DBA new to the Magento framework, I am unable to find where the Magento queries are triggered from. Can anyone tell me the location these queries run from?
You have not specified which module's data files or which query you want to check. Magento uses an MVC structure, and you will find all data-related operations in the model, located in app/code/codepool/namespace/module/model.
Here the code pool will be local or community; the third is core, but if you want to modify a file from core, copy it into local and make your changes there.
The namespace will be the company name and module will be the module name.
Hope this helps.
I am continuing the development of an Android app that works against a server built with Maven and the Spring Framework; the server database is MySQL.
I need two very specific things:
I want to create a table.
I want to create a trigger.
But I want to create them from the Spring Framework and Maven, and I could not find a way to do it.
Since it is a project among several people, I want it to be as automatic as possible to keep things simple.
I want the server, on startup, to create the table and the trigger if they do not exist.
Can it be done?
I would like a simple example, or a site I can visit that gives me at least the concept of how to do it.
Maven is a tool which builds your application. It's not available when the application runs. Therefore, it's often not the best choice to create database tables, etc.
A better approach is to locate the place where Spring's ApplicationContext is created and add code there that examines the database, finds out which tables already exist, and creates those which don't.
Later, you can extend the code to migrate data when your data model changes.
To execute SQL, check the Spring JDBC documentation, especially JdbcTemplate.
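A minimal sketch of that idea, assuming a DataSource bean is already configured; the table and trigger names are made up for illustration:

```java
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

// Hypothetical startup initializer: creates a table and a trigger
// if they do not already exist (MySQL syntax).
public class SchemaInitializer {

    private final JdbcTemplate jdbc;

    public SchemaInitializer(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource);
    }

    // Invoke once on startup, e.g. via init-method in the Spring config.
    public void init() {
        // IF NOT EXISTS makes the table creation idempotent.
        jdbc.execute(
            "CREATE TABLE IF NOT EXISTS audit_log ("
            + " id BIGINT AUTO_INCREMENT PRIMARY KEY,"
            + " message VARCHAR(255) NOT NULL,"
            + " created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)");

        // Older MySQL versions have no CREATE TRIGGER IF NOT EXISTS,
        // so drop-and-recreate is one simple way to stay idempotent.
        jdbc.execute("DROP TRIGGER IF EXISTS audit_log_before_insert");
        jdbc.execute(
            "CREATE TRIGGER audit_log_before_insert"
            + " BEFORE INSERT ON audit_log FOR EACH ROW"
            + " SET NEW.message = TRIM(NEW.message)");
    }
}
```

Wired as something like <bean class="com.example.SchemaInitializer" init-method="init"> with the DataSource injected, this runs every time the context starts; a dedicated migration tool can replace the hand-rolled check later on.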
This is a purely organizational question about SSIS project best practices for medium-sized imports.
So I have a source database which is continuously being enriched with new data. Then I have a staging database into which I sometimes load the data from the source database, so I can work on a copy of it while migrating the current system. I am actually using an SSIS Visual Studio project to import this data.
My issue is that I have realised the actual design of my project is not really optimal, and I would now like to move this project to SQL Server so I can schedule the import instead of running the Visual Studio project manually. That means the project needs to be cleaned up and optimized.
So basically, for each table, the process is simple: truncate the table, extract from the source, and load into the destination. And I have about 200 tables. Extractions cannot be parallelized, as the source database only accepts one connection at a time. So how would you design such a project?
I read in the Microsoft documentation that they recommend using one Data Flow per package, but managing 200 different packages seems quite impossible, especially since I will have to chain them to schedule the import. On the other hand, a single package with 200 Data Flows seems unmanageable too...
Edit 21/11:
The first approach I wanted to use when starting this project was to extract my tables automatically by iterating over a list of table names. This could have worked out well if my source and destination tables had all had the same schema object names, but since the source and destination databases are from different vendors (BTrieve and Oracle), they also have different naming restrictions. For example, BTrieve does not reserve names and allows names longer than 30 characters, which Oracle does not. So that is how I ended up manually creating 200 Data Flows with semi-automatic column mapping (most were automatic).
When generating the CREATE TABLE queries for the destination database, I created a reusable C# library containing the methods to generate the new schema object names, just in case the methodology could be automated. If there is a custom tool that can generate the packages and use an external .NET library, then this might do the trick.
Have you looked into BIDS Helper's BIML (Business Intelligence Markup Language) as a package generation tool? I've used it to create multiple packages that all follow the same basic truncate-extract-load pattern. If you need slightly more cleverness than what's built into BIML, there's BimlScript, which adds the ability to embed C# code into the processing.
From your problem description, I believe you'd be able to write one BIML file and have that generate two hundred individual packages. You could probably use it to generate one package with two hundred data flow tasks, but I've never tried pushing SSIS that hard.
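For orientation, a single truncate-extract-load package expressed in BIML looks roughly like the sketch below. This is untested and from memory; the connection strings and table name are placeholders, and a real file would loop over all 200 tables (with BimlScript, if the naming logic needs C#).

```xml
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Connections>
    <!-- Placeholder connection strings -->
    <Connection Name="Source"  ConnectionString="..." />
    <Connection Name="Staging" ConnectionString="..." />
  </Connections>
  <Packages>
    <Package Name="Load_MyTable" ConstraintMode="Linear">
      <Tasks>
        <ExecuteSQL Name="Truncate MyTable" ConnectionName="Staging">
          <DirectInput>TRUNCATE TABLE dbo.MyTable</DirectInput>
        </ExecuteSQL>
        <Dataflow Name="Copy MyTable">
          <Transformations>
            <OleDbSource Name="Extract" ConnectionName="Source">
              <DirectInput>SELECT * FROM MyTable</DirectInput>
            </OleDbSource>
            <OleDbDestination Name="Load" ConnectionName="Staging">
              <ExternalTableOutput Table="dbo.MyTable" />
            </OleDbDestination>
          </Transformations>
        </Dataflow>
      </Tasks>
    </Package>
  </Packages>
</Biml>
```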
You can basically create 10 child packages, each having 20 Data Flow Tasks, and create a master package which triggers these child packages. Using parent-to-child configuration, create a single XML configuration file. Define precedence constraints in the master package so the child packages execute serially. This way, maintainability will be better compared to having 200 packages or a single package with 200 Data Flow Tasks.
The following link may be useful to you.
Single SSIS Package for Staging Process
Hope this helps!
I get this error no matter what version of SubSonic I use. When I query the database for data, it errors out, saying it cannot connect to the database.
However, it is able to generate the .cs classes (ActiveRecord, Context, etc.) when told to do so.
Any help is appreciated.
Thanks folks...
My guess is that you have your SubSonic-generated classes in a separate project from your main application (another project in the same solution), and your main application project references the project containing the SubSonic-generated classes.
If this is the case, your main application project must also contain the connection string in a config file, similar to what your other project has. You might also need to copy over some of the other SubSonic-related items from your other project's config file as well.
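As an illustration, the piece that usually has to be duplicated into the main application's App.config or Web.config is the connection string; the name and values below are made up, and the name must match whatever the generated code expects.

```xml
<configuration>
  <connectionStrings>
    <!-- Hypothetical values: adjust server, database, and provider -->
    <add name="MyAppConnection"
         connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```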