I'm trying to convert the output of a Model->find query into SQL to be inserted into a database completely separate from the MySQL database currently used by the CakePHP system. My problem is that I have several virtual fields in the models, which are inevitably returned when performing a Model->find on the data. Clearly, I need to find and remove these virtual fields from the find results if I am to convert the data into SQL for an identical database to the original MySQL one. Is there a simple way to omit virtual fields? Any way this can be done in a version higher than 1.3 would also be very helpful.
Many thanks.
You can either define your virtual fields only at runtime, which is what I usually do:
$this->virtualFields['x'] = 'y';
// find query
But you can also limit the fields of the find call:
'fields' => array('all fields without the virtual fields')
This will also skip your virtual fields.
Usually you don't want to verbosely define all fields, though.
You can also unset all the virtual fields for the find() call:
$tmp = $this->virtualFields;
$this->virtualFields = array();
// find query
$this->virtualFields = $tmp;
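If you need this in several places, the backup/restore dance can be wrapped in a small helper (a minimal sketch for CakePHP 1.3/2.x; the method name findWithoutVirtualFields is just a suggestion, not a core API):
// In AppModel (hypothetical helper, not part of CakePHP core)
public function findWithoutVirtualFields($type = 'all', $query = array()) {
    $backup = $this->virtualFields;   // remember the virtual field definitions
    $this->virtualFields = array();   // temporarily disable them
    $results = $this->find($type, $query);
    $this->virtualFields = $backup;   // restore them for later calls
    return $results;
}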
Setup:
I'm using Ruby on Rails with ActiveRecord and MySQL.
I have a Coupon model.
It has an attribute called query; it is a string that could be run with a where.
For example:
@coupon.query
=> "'http://localhost:3003/hats' = :url OR 'http://localhost:3003/shoes' = :url"
If I were to run this query it would either pass or fail based on the :url value I pass in.
# passes
Coupon.where(@coupon.query, url: 'http://localhost:3003/hats')
Coupon.where(@coupon.query, url: 'http://localhost:3003/shoes')
# fails
Coupon.where(@coupon.query, url: 'http://localhost:3003/some_other_url')
This query varies between Coupon models, but it will always be compared to the current url.
I need a way to say: given an ActiveRecord collection @coupons, only keep the coupons whose queries pass.
The structure of the where is always the same, but the query changes.
Is there any way to do this without a loop? I could potentially have a lot of coupons, and I am hoping to do this in an ActiveRecord scope. Something like this?
@coupons.where(self.query, url: @url)
Perhaps I need to write a user defined function in my database?
Using multiple variables in a query is easy, but a case where the thing you are comparing your variable to is itself a variable has me stumped. Any suggestions are very much appreciated.
I would agree with Les Nightingill's comment that this looks like something that should probably be solved at a more architectural level. I'd imagine an easy refactoring to extract a new CouponQuery model backed by a 1:n table that holds one row per coupon_id for each query URL that should pass. Then you could use a simple join like
Coupon.joins(:coupon_queries).where(coupon_queries: { url: my_url })
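For completeness, the models behind that join could look something like this (a sketch only; the CouponQuery model, its columns, and the scope name matching_url are assumptions):
class Coupon < ActiveRecord::Base
  has_many :coupon_queries

  # coupons whose list of allowed URLs contains the given one
  scope :matching_url, ->(url) { joins(:coupon_queries).where(coupon_queries: { url: url }) }
end

class CouponQuery < ActiveRecord::Base
  belongs_to :coupon
  # columns: coupon_id (integer), url (string)
end
Coupon.matching_url('http://localhost:3003/hats') would then return only the coupons that pass for that URL.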
If adding a new table is not an option, and if you're running on a newer MySQL version (>= 5.7), you could consider transforming the query column (or adding a new json_query column) into a MySQL JSON field and using the new JSON_CONTAINS query.
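With such a JSON column the lookup could be as simple as this (a sketch; it assumes the json_query column holds a JSON array of the allowed URLs, stored as a native JSON field on MySQL >= 5.7):
# json_query e.g. ["http://localhost:3003/hats", "http://localhost:3003/shoes"]
Coupon.where('JSON_CONTAINS(json_query, ?)', my_url.to_json)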
If, from the user's side, the queries should still be manageable as a plain text field, you could use a before_save hook on your model to translate that text into the separate table structure or the JSON format, respectively.
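A before_save along these lines could keep the plain-text query field as the single source of truth (again only a sketch; the regex just pulls the quoted URLs out of the "'...' = :url" pattern shown in the question, and json_query is the hypothetical column from above):
class Coupon < ActiveRecord::Base
  before_save :extract_allowed_urls

  private

  # hypothetical: derive the json_query array from the plain-text query
  def extract_allowed_urls
    self.json_query = query.to_s.scan(/'([^']+)'\s*=\s*:url/).flatten
  end
end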
But if neither is an option for you and you really need to stick with the query column that stores a plain string, then you could use a LIKE query to match the sub-string 'your-url' = :url:
Coupon.where('query LIKE "%? = :url%"', my_url)
which, if you pass e.g. 'http://localhost:3003/hats' as my_url, would produce something like this SQL query:
SELECT `coupons`.* FROM `coupons`
WHERE (query LIKE "%'http://localhost:3003/hats' = :url%")
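If you want that available as the scope the question asks about, a minimal sketch could be (same assumed scope name as above, now built on the LIKE trick):
class Coupon < ActiveRecord::Base
  # hypothetical scope wrapping the LIKE match on the stored query string
  scope :matching_url, ->(url) { where('query LIKE "%? = :url%"', url) }
end

Coupon.matching_url('http://localhost:3003/hats')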
I'm developing an API using NestJS & TypeORM to fetch data from a MySQL DB. Currently I'm trying to get all the instances of an entity (HearingTonalTestPage) and all the related entities (e.g. Frequency). I can get it using createQueryBuilder:
const queryBuilder = await this.hearingTonalTestPageRepo
.createQueryBuilder('hearing_tonal_test_page')
.innerJoinAndSelect('hearing_tonal_test_page.hearingTest', 'hearingTest')
.innerJoinAndSelect('hearingTest.page', 'page')
.innerJoinAndSelect('hearing_tonal_test_page.frequencies', 'frequencies')
.innerJoinAndSelect('frequencies.frequency', 'frequency')
.where(whereConditions)
.orderBy(`page.${orderBy}`, StringToSortType(pageFilterDto.ascending));
The problem here is that this will produce a SQL query (screenshot below) which will output a row for each related entity (Frequency), when I want to output a row for each HearingTonalTestPage (in the screenshot example, 3 rows instead of 12) without losing its relations data. Reading the docs, apparently this can be easily achieved using the relations option with .find(). With QueryBuilder I see some relation methods, but from what I've read, under the hood they will produce JOINs, which of course I want to avoid.
So the million-dollar question here is: is it possible with createQueryBuilder to load the relations after querying the main entities (something similar to .find({ relations: { } }))? If so, how can I achieve it?
I am not an expert, but I had a similar case and using:
const qb = this.createQueryBuilder("product");
// apply relations
FindOptionsUtils.applyRelationsRecursively(qb, ["createdBy", "updatedBy"], qb.alias, this.metadata, "");
return qb
.orderBy("product.id", "DESC")
.limit(1)
.getOne();
it worked for me; all relations are correctly loaded.
ref: https://github.com/typeorm/typeorm/blob/master/src/find-options/FindOptionsUtils.ts
You say that you want to avoid JOINs and are looking for an analogue of find({ relations: {} }), but, as the documentation says, find({ relations: {} }) itself uses LEFT JOINs under the hood. So when we talk about a query with relations, it can't be done without JOINs.
Now about the problem:
The problem here is that this will produce a SQL query (screenshot
below) which will output a line per each related entity (Frequency),
when I want to output a line per each HearingTonalTestPage
Your query looks fine, and the result of the query is also OK. I think you expected the result to look like a JSON structure (where each relation field contains all of its information nested inside itself instead of creating new rows and spreading its values over several rows), but that is simply how SQL works. By the way, the getMany() method should return 3 HearingTonalTestPage objects, not 12, so what the raw SQL query returns should not worry you.
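To illustrate (entity and relation names are taken from the question, and the counts assume its screenshot example):
const pages = await queryBuilder.getMany();

console.log(pages.length);          // 3: one object per HearingTonalTestPage
console.log(pages[0].frequencies);  // the related Frequency entries, nested inside the entity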
The main question:
is it possible with CreateQueryBuilder to load the relations after
querying the main entities
I didn't get what you mean by "after querying the main entities". Can you provide more context?
Is it possible to create a generic query that would work for different types of documents? For example, I have "cases" and "factories".
They have different sets of fields, e.g.:
{
id: 'case_o1',
name: 'Case numero uno',
amount: 40
}
{
id: 'factory_002',
location: 'Venezuela',
workers: 200,
operating: true
}
Is it possible to create a generic query where I would pass the type of an entity (case or factory) and additional parameters and it would filter results based on those?
I could of course use a JavaScript view, but it doesn't allow me to filter by multiple fields. Let's say I want to fetch all factories located in Venezuela with a number of workers between 20 and 55.
I started with this, but then I got stuck:
select * from `mybucket` as entity
where position(meta(entity).id, $entity_type) == 0
How do I pass multiple predicates and have the query to recognize them?
I can of course list fields like this:
where position(meta(entity).id, $entity_type) == 0
and entity.location == 'Venezuela'
and entity.workers > $workers_min
and entity.workers < $workers_max
but then:
I'm going to have to create a separate query for each entity.
And even then it won't solve my problem: I have no idea how to ignore predicates. What if next time $workers_min and $workers_max are not passed? Does that mean I have to create a query for every single predicate (column)?
For security reasons I cannot generate free-form queries and pass them to the Couchbase server; all the queries are already stored in the database, and our API just picks them out of a document and executes them.
I think it's possible to create a query that would be "short-circuiting" for args that are undefined (e.g. WHERE $location IS MISSING OR entity.location == $location, or something like that).
Is it possible at all to create a query that can effectively filter and order a dataset based on arbitrary parameters? Or is there no way?
@Agzam, sorry, I was writing my comment when you said that. But anyway: what you are asking for is possible by using COALESCE in not-too-complex expressions, but it is a REALLY bad idea, because it will drastically defeat most internal database optimizations, including the use of any existing index. So, unless you are dealing with a relatively small database (and you are sure it will remain approximately the same size), I suggest you try a different approach… This is, in fact, the reason I implemented sqlapi.
If you need to have all queries stored in the database beforehand, it would probably be much better to sort the given arguments by name and precalculate and store a query for each possible combination.
You can do it by assigning a default value to the variable when it is not used. For instance, if $location is not used you can set it to -1 as the default value.
Then the where condition would be:
WHERE ($location=-1 OR entity.location = $location)
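Extending that idea to the example from the question, the stored query could look something like this (a sketch only; it reuses the -1 "not used" convention from above for every optional parameter, with the bucket and field names taken from the question):
SELECT entity.*
FROM `mybucket` AS entity
WHERE position(meta(entity).id, $entity_type) == 0
  AND ($location = -1 OR entity.location = $location)
  AND ($workers_min = -1 OR entity.workers > $workers_min)
  AND ($workers_max = -1 OR entity.workers < $workers_max)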
I have a MySQL query that looks like this: $query = "SELECT * FROM #__content".
What does the #__ at the start of the table name mean?
#__ is a prefix of your tables
#__ is simply the database table prefix and is defined in your configuration.php.
If it wasn't defined, people would have to manually input their prefix into every extension that requires access to the database, which you can imagine would be annoying.
So for example, if your database table prefix is j25_, then:
#__content = j25_content
As others have said, the hash-underscore sequence '#__' is the placeholder used in table names by Joomla!'s JDatabase class; it is replaced with the table prefix configured for your site.
When you first set up Joomla! you are given the option of setting a prefix or using the one randomly generated at the time. You can read about how to check the prefix here.
When you access the database using the JDatabase class it provides you with an abstraction mechanism so that you can interact with the database that Joomla is using without you having to code specifically for MySQL or MSSQL or PostgreSQL etc.
When JDatabase prepares a query prior to executing it, it replaces any occurrence of #__ in the query with the prefix set up when Joomla! was installed, e.g.
// Get the global DB object.
$db = JFactory::getDBO();
// Create a new query object.
$query = $db->getQuery(true);
// Select some fields
$query->select('*');
// Set the FROM segment
$query->from('#__myComponents_Table');
Later, when you execute the query, JDatabase will change the FROM segment of the SQL from
from #__myComponents_Table to
from jp25_myComponents_Table (assuming the prefix is jp25_) prior to executing it.
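If you want to see the substitution for yourself, the database driver exposes it directly (a quick sketch; method names are those of the Joomla 2.5/3.x JDatabaseDriver, and the output assumes the jp25_ prefix):
// Inspect the configured prefix and the rewritten SQL
$db = JFactory::getDBO();
echo $db->getPrefix();                                // jp25_
echo $db->replacePrefix('SELECT * FROM #__content');  // SELECT * FROM jp25_content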
Hi there, I have some SQL tables and I want to convert these into a "Drupal node format", but I don't know how to do it. Does someone know at least which tables I have to write to in order to have a full node with all the keys etc.?
I will give an example :
I have these objects:
Anime
field animeID
field animeName
Producer
field producerID
field producerName
AnimeProducers
field animeID
field producerID
I have used the CCK module and created in my Drupal site a new content type Anime and a new data type Producer that exists within an Anime object.
How can I insert all the data from my simple MySQL DB into Drupal?
Sorry for the long post; I wanted to give you the chance to understand my problem.
Thanks in advance for taking the time to read my post.
You can use either the Feeds module to import flat CSV files, or there is a module called Migrate that seems promising (albeit pretty intense). Both work on Drupal 6 or 7.
Hmmm... I think you can export CSV from your SQL database and then use
http://drupal.org/project/node_import
to import this CSV data to nodes... I don't know if there is another non-programmatic way.
The main tables for node property data are node and node_revision; have a look at the columns in those and it should be fairly obvious what needs to go where.
As far as fields go, their storage is predictable, so you would be able to automate an import (although I don't envy you having to write that!). If your field is called 'field_anime', its data will live in two tables: field_data_field_anime and field_revision_field_anime, which are keyed by the entity ID (in this case the node ID), entity type (in this case 'node' itself) and bundle (in this case the machine name of your node type). You should keep both tables up to date to ensure the revision system functions correctly.
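For reference, a single value of field_anime on node 123 would end up as rows roughly like this (a sketch of Drupal 7's per-field storage from memory; the exact value column name depends on the field type, here assumed to be a simple single-value field):
INSERT INTO field_data_field_anime
  (entity_type, bundle, deleted, entity_id, revision_id, language, delta, field_anime_value)
VALUES
  ('node', 'anime', 0, 123, 123, 'und', 0, 'some value');
-- plus a matching row in field_revision_field_anime keyed by the revision ID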
The simplest way to do it though is with PHP and the node API functions:
/* This is for a single node, obviously you'd want to loop through your custom SQL data here */
$node = new stdClass;
$node->type = 'my_type';
$node->title = 'Title';
node_object_prepare($node);
// Fields
$node->field_anime[LANGUAGE_NONE] = array(0 => array('value' => $value_for_field));
$node->field_producer[LANGUAGE_NONE] = array(0 => array('value' => $value_for_field));
// And so on...
// Finally save the node
node_save($node);
If you use this method, Drupal will handle a lot of the messy stuff for you (for example, automatically updating the taxonomy_index table when you add a taxonomy term field to a node).
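Putting it together, the import loop for the Anime table could look roughly like this (a sketch only, using the Drupal 7 API as in the snippet above; it assumes the legacy data has been copied into a table called legacy_anime in the Drupal database):
// Hypothetical source table name: legacy_anime
$result = db_query('SELECT animeID, animeName FROM {legacy_anime}');
foreach ($result as $row) {
  $node = new stdClass;
  $node->type = 'anime';            // machine name of your Anime content type (assumed)
  $node->language = LANGUAGE_NONE;
  node_object_prepare($node);
  $node->title = $row->animeName;
  // ... set field_anime / field_producer values here as shown above ...
  node_save($node);
}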