I am trying to grab two columns of data out of a database using Ruby on Rails ActiveRecord calls, and put them into a 2D JSON array to pass to the client.
I have it working for one column. Now I need to get it working for 2 columns.
This is what I have so far for the database call:
select("TOTAL").map{|x| x.TOTAL.ceil}
This is what I have for the controller:
@results = JSON.dump({ :totals => PerformanceResults.find_totals })
This gives me something like this:
{"totals" [145,132,863,693,372,74,838,91,18,172,84,90,373,161,160,173,1910,210,513,14,79,21,84,41,2630,0,93,150,2971]}
To get two columns, this is how I'm starting out, but it's not going well:
Database call:
select("TOTAL, time_stamp ").map{|x| x.attributes.slice(:x.TOTAL.ceil, x.time_stamp)}
It's telling me "undefined method `TOTAL' for :x:Symbol", which I understand, but since I'm new to Ruby on Rails and also JSON, I thought I'd ask for some help in doing this...
My goal is to get this passed to the client: {"totals": [['timestamp', data], ['timestamp', data], ... ]}
I have solved this on my own; for anyone looking for this solution in the future, the following works:
select("TOTAL, time_stamp ").map{|x| [x.TOTAL.ceil, x.time_stamp]}
In the Rails console, to fetch multiple columns, you could also use the following method. Suppose you have a User table and you wish to print the IDs and emails of the users; you can do it as shown below:
User.all.map{|user| "#{user.id},#{user.email}"}
This is an alternative to what was already explained above.
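If you only need those two columns, a potentially more efficient variant is pluck, which is standard ActiveRecord and selects just those columns in SQL instead of instantiating full User objects (output shown is illustrative):
User.pluck(:id, :email)
# => [[1, "first@example.com"], [2, "second@example.com"], ...]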
Setup:
I'm using Ruby on Rails with ActiveRecord and MySQL.
I have a Coupon model.
It has an attribute called query; it is a string that could be run inside a where.
For example:
@coupon.query
=> "'http://localhost:3003/hats' = :url OR 'http://localhost:3003/shoes' = :url"
If I were to run this query it would either pass or fail based on the :url value I pass in.
# passes
Coupon.where(@coupon.query, url: 'http://localhost:3003/hats')
Coupon.where(@coupon.query, url: 'http://localhost:3003/shoes')
# fails
Coupon.where(@coupon.query, url: 'http://localhost:3003/some_other_url')
This query varies between Coupon models, but it will always be compared to the current url.
I need a way to say: Given an ActiveRecord collection #coupons only keep coupons with queries that pass.
The structure of the where is always the same, but the query changes.
Is there any way to do this without a loop? I could potentially have a lot of coupons, and I am hoping to do this in an ActiveRecord scope. Something like this?
@coupons.where(self.query, url: @url)
Perhaps I need to write a user defined function in my database?
Using multiple variables in a query is easy, but a case where the thing you are comparing your variable to is itself a variable has me stumped. Any suggestions are very much appreciated.
I would agree with Les Nightingill's comment that this looks like something that should probably be solved at a more architectural level. I'd imagine an easy refactoring to extract a new CouponQuery model that's a 1:n table containing multiple entries for a coupon_id for each query url that should pass. Then you could use a simple join like
Coupon.joins(:coupon_queries).where(coupon_queries: { url: my_url })
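For illustration, a minimal sketch of what that extraction could look like (the class, association, and column names here are assumptions, not taken from the question):
class Coupon < ActiveRecord::Base
  has_many :coupon_queries
end
class CouponQuery < ActiveRecord::Base
  belongs_to :coupon
  # columns: coupon_id (integer), url (string) -- one row per URL that should pass
end
# "only keep coupons whose queries pass for the current URL" then becomes:
Coupon.joins(:coupon_queries).where(coupon_queries: { url: @url })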
If adding a new table is not an option, and if you're running on a newer MySQL version (>= 5.7), you could consider transforming the query column (or adding a new json_query column) into a MySQL JSON field and using the new JSON_CONTAINS query.
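As a rough sketch of that option (the json_query column and its contents are assumptions): if json_query held a JSON array of the URLs that should pass, e.g. ["http://localhost:3003/hats", "http://localhost:3003/shoes"], the lookup could be written as:
Coupon.where('JSON_CONTAINS(json_query, ?)', @url.to_json)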
If, from the user side, the queries should still be manageable as a plain text field, you could use a before_save hook on your model to translate that text into the separate table structure or the JSON format, respectively.
But if neither is an option for you and you really need to stick with the query column that stores a plain string, then you could use a LIKE query to match the sub-string 'your-url' = :url:
Coupon.where('query LIKE "%? = :url%"', my_url)
which, if you pass e.g. 'http://localhost:3003/hats' as my_url, would produce something like this SQL query:
SELECT `coupons`.* FROM `coupons`
WHERE (query LIKE "%'http://localhost:3003/hats' = :url%")
I'm developing an API using NestJS & TypeORM to fetch data from a MySQL DB. Currently I'm trying to get all the instances of an entity (HearingTonalTestPage) and all the related entities (e.g. Frequency). I can get it using createQueryBuilder:
const queryBuilder = await this.hearingTonalTestPageRepo
.createQueryBuilder('hearing_tonal_test_page')
.innerJoinAndSelect('hearing_tonal_test_page.hearingTest', 'hearingTest')
.innerJoinAndSelect('hearingTest.page', 'page')
.innerJoinAndSelect('hearing_tonal_test_page.frequencies', 'frequencies')
.innerJoinAndSelect('frequencies.frequency', 'frequency')
.where(whereConditions)
.orderBy(`page.${orderBy}`, StringToSortType(pageFilterDto.ascending));
The problem here is that this will produce a SQL query (screenshot below) which outputs a row for each related entity (Frequency), when I want one row per HearingTonalTestPage (in the screenshot example, 3 rows instead of 12) without losing the relations data. Reading the docs, apparently this can be easily achieved using the relations option with .find(). With QueryBuilder I see some relation methods, but from what I've read, under the hood they will produce JOINs, which of course I want to avoid.
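For reference, this is roughly what the find()-based version from the docs would look like (the relation nesting mirrors the joins above but is an assumption, and it presumes whereConditions is a plain options object):
const pages = await this.hearingTonalTestPageRepo.find({
  where: whereConditions,
  relations: {
    hearingTest: { page: true },
    frequencies: { frequency: true },
  },
});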
So the million-dollar question here is: is it possible with createQueryBuilder to load the relations after querying the main entities (something similar to .find({ relations: { } }))? If yes, how can I achieve it?
I am not an expert, but I had a similar case and used the following:
const qb = this.createQueryBuilder("product");
// apply relations
FindOptionsUtils.applyRelationsRecursively(qb, ["createdBy", "updatedBy"], qb.alias, this.metadata, "");
return qb
.orderBy("product.id", "DESC")
.limit(1)
.getOne();
It worked for me; all relations are correctly loaded.
ref: https://github.com/typeorm/typeorm/blob/master/src/find-options/FindOptionsUtils.ts
You say that you want to avoid JOINs and are seeking an analogue of find({ relations: {} }), but, as the documentation says, find({ relations: {} }) itself uses LEFT JOINs under the hood. So when we talk about a query with relations, it can't be done without JOINs.
Now about the problem:
The problem here is that this will produce a SQL query (screenshot
below) which will output a line per each related entity (Frequency),
when I want to output a line per each HearingTonalTestPage
Your query looks fine, and the result of the query is also OK. I think you expected the result to look like a JSON structure (where the relation field contains all of its information nested inside itself, instead of creating new rows and spreading its values across several rows), but that is how SQL works. By the way, the getMany() method should return 3 HearingTonalTestPage objects, not 12, so what the raw SQL query returns should not worry you.
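To illustrate (property names follow the query builder above; the counts are only an example), the repeated SQL rows are folded by TypeORM back into distinct entities:
const pages = await queryBuilder.getMany();
// pages.length === 3                  -> one object per HearingTonalTestPage
// pages[0].frequencies.length         -> the joined Frequency rows become array elements
// pages[0].frequencies[0].frequency   -> nested relation entity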
The main question:
is it possible with CreateQueryBuilder to load the relations after
querying the main entities
I didn't get what you mean by "after querying the main entities". Can you provide more context?
Y'all,
Within my custom Strapi content-type controller code, what method on the model object do I use to create a new record? My app is configured to use MySQL.
The following worked fine when I was using MongoDB, but now with MySQL it doesn't work.
With Mongo, in my code, I was doing this:
let model = strapi.models[modelName];
await model.create({"Name":"<NEW ENTRY>", "Path":ruleData.requestedPath});
But now, with MySQL, I get an error saying that model.create() is not a function. 🤔
Also, when I step into the code, create() is no longer there. I can't seem to find the equivalent "create" method in the model object for MySQL either.
Does the Strapi ORM model object change its member functions, etc., when moving from MongoDB to MySQL? I thought not, since that was a big part of the reason for using the ORM.
I suggest you use strapi.query('article') instead of strapi.models.article
So it will be strapi.query('article').create({...})
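Adapted to the code in the question, that would look roughly like this (modelName and the field names come from the question; this is a sketch assuming Strapi v3's query API):
const entry = await strapi.query(modelName).create({
  Name: '<NEW ENTRY>',
  Path: ruleData.requestedPath,
});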
I have a problem. I have this table (Pkb):
It basically stores a date (in varchar format, YYYYMMDD), and I want to select all the rows with a date before today using Eloquent (Laravel 5):
$today = Carbon::today();
$filterdate = Pkb::whereDate('rtglb', '<=', $today)->get();
It works if used in a controller, but not with php artisan tinker (so basically it's not working and throws an error):
When I put it in the handle function of a console command, it doesn't work either, because rtglb is a varchar column. (I need to put it in a console command to create a scheduler.)
I have access to the database, but I can't change the existing PHP code, because this table belongs to an old module and I have to create a new module that takes data from this table and does calculations on it via the scheduler.
Is there any way to convert the content of rtglb to a date before selecting on it, using Eloquent or raw MySQL? Or is there any other solution for me?
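One possible approach (a sketch, not tested against the original schema) is to convert the varchar inside the query with MySQL's STR_TO_DATE, or simply compare the YYYYMMDD strings directly, since that format sorts the same way as dates:
$today = Carbon::today();
// Convert rtglb to a real DATE inside the query
$filterdate = Pkb::whereRaw("STR_TO_DATE(rtglb, '%Y%m%d') <= ?", [$today->toDateString()])->get();
// Or rely on the fact that YYYYMMDD strings compare like dates
$filterdate = Pkb::where('rtglb', '<=', $today->format('Ymd'))->get();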
Hi there, I have some SQL tables and I want to convert them into a "Drupal node format", but I don't know how to do it. Does someone know at least which tables I have to write to in order to have a full node with all the keys etc.?
I will give an example:
I have these objects:
Anime
field animeID
field animeName
Producer
field producerID
field producerName
AnimeProducers
field animeID
field producerID
I have used the CCK module and created in my Drupal site a new content type Anime and a new data type Producer that exists inside an Anime object.
How can I insert all the data from my simple MySQL DB into Drupal?
Sorry for the long post; I wanted to give you the chance to understand my problem.
Thanks in advance for taking the time to read my post.
You can use either the Feeds module to import flat CSV files, or there is a module called Migrate that seems promising (albeit pretty intense). Both work on Drupal 6 or 7.
I think you can export CSV from your SQL database and then use
http://drupal.org/project/node_import
to import this CSV data into nodes. I don't know if there is another non-programmatic way.
The main tables for node property data are node and node_revision; have a look at the columns in those and it should be fairly obvious what needs to go where.
As far as fields go, their storage is predictable, so you would be able to automate an import (although I don't envy you having to write that!). If your field is called 'field_anime', its data will live in two tables, field_data_field_anime and field_revision_field_anime, which are keyed by the entity ID (in this case the node ID), entity type (in this case 'node' itself) and bundle (in this case the name of your node type). You should keep both tables up to date to ensure the revision system functions correctly.
The simplest way to do it though is with PHP and the node API functions:
/* This is for a single node, obviously you'd want to loop through your custom SQL data here */
$node = new stdClass;
$node->type = 'my_type';
$node->title = 'Title';
node_object_prepare($node);
// Fields
$node->field_anime[LANGUAGE_NONE] = array(0 => array('value' => $value_for_field));
$node->field_producer[LANGUAGE_NONE] = array(0 => array('value' => $value_for_field));
// And so on...
// Finally save the node
node_save($node);
If you use this method, Drupal will handle a lot of the messy stuff for you (for example, updating the taxonomy_index table automatically when adding a taxonomy term field to a node).