I am using Scala to connect to a Couchbase bucket and insert data into it.
case class User(
  firstName: String,
  lastName: String,
  userName: String,
  email: String)

bucket.upsert(SerializableDocument.create("usr::" + user.email, user))
I am able to insert and retrieve data from the bucket. Now I want to create a view/secondary index on the firstName field of User.
val ensureIndex = Query.simple("CREATE INDEX firstName ON `user_account`(firstName)")
val indexResult = bucket.query(ensureIndex)
val queryResult = bucket.query(ViewQuery.from("dev_ddl_firstName", "firstName"))
But queryResult.totalRows() returns 0.
Can anyone show me the correct way to create a view/secondary index on a field in Couchbase?
Thanks in advance.
You're mixing two concepts there. The index definition is for a N1QL query, though it does create a view under the hood. Typically, if you create the index through a N1QL statement, you'll query with N1QL as well.
The query you're running is against the view created by it. My suspicion is that you need to publish it or use the full_set parameter against the development view. It may be better to stick with a N1QL query.
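For example, once the index has been built, a minimal sketch of querying it with N1QL, staying with the same Query.simple API used in the question (the bucket name `user_account` comes from your index statement; the literal 'John' is just a placeholder):

val byFirstName = Query.simple(
  "SELECT firstName, lastName, email FROM `user_account` WHERE firstName = 'John'")
val n1qlResult = bucket.query(byFirstName)
// n1qlResult should now contain one row per matching user document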
Additionally to what Matt Ingenthron said: SerializableDocument in the Java SDK stores data with a binary flag, and views can only index the content of JSON documents, not binary ones.
You can either use the marshaller of your choice to transform your instance into a JSON string and store it with a RawJsonDocument, or let the SDK marshal it with Jackson by converting your document to a JsonObject (the SDK's simple JSON-manipulation class, though it is geared more toward Java) and using a JsonDocument.
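For instance, a minimal sketch of the RawJsonDocument option; the JSON string is built by hand here purely for illustration, in practice you'd use a real marshaller (Jackson, or a Scala JSON library of your choice):

val json = s"""{"firstName":"${user.firstName}","lastName":"${user.lastName}","userName":"${user.userName}","email":"${user.email}"}"""
bucket.upsert(RawJsonDocument.create("usr::" + user.email, json))
// Stored with the JSON flag, so views and N1QL can index firstName.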
Setup:
I'm using Ruby on Rails with ActiveRecord and MySQL.
I have a Coupon model.
It has an attribute called query, which is a string that can be dropped into a where clause.
For example:
@coupon.query
=> "'http://localhost:3003/hats' = :url OR 'http://localhost:3003/shoes' = :url"
If I were to run this query it would either pass or fail based on the :url value I pass in.
# passes
Coupon.where(@coupon.query, url: 'http://localhost:3003/hats')
Coupon.where(@coupon.query, url: 'http://localhost:3003/shoes')
# fails
Coupon.where(@coupon.query, url: 'http://localhost:3003/some_other_url')
This query varies between Coupon models, but it will always be compared to the current URL.
I need a way to say: given an ActiveRecord collection @coupons, only keep the coupons whose queries pass.
The structure of the where is always the same, but the query changes.
Is there any way to do this without a loop? I could potentially have a lot of coupons, and I am hoping to do this in an ActiveRecord scope. Something like this?
@coupons.where(self.query, url: @url)
Perhaps I need to write a user-defined function in my database?
Using multiple variables in a query is easy, but here the thing you are comparing your variable to is also a variable, and that has me stumped. Any suggestions very appreciated.
I would agree with Les Nightingill's comment that this looks like something that should probably be solved at a more architectural level. I'd imagine an easy refactoring would be to extract a new CouponQuery model as a 1:n table containing, for each coupon_id, one entry per query URL that should pass. Then you could use a simple join like
Coupon.joins(:coupon_queries).where(coupon_queries: { url: my_url })
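A rough sketch of that extracted model; the class and association names are assumptions for illustration:

class Coupon < ActiveRecord::Base
  has_many :coupon_queries
end

# coupon_queries table: id, coupon_id, url
class CouponQuery < ActiveRecord::Base
  belongs_to :coupon
end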
If adding a new table is not an option, and if you're running on a newer MySQL version (>= 5.7), you could consider transforming the query column (or adding a new json_query column) into a MySQL JSON field and using the new JSON_CONTAINS function, as sketched below.
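A rough sketch of what that could look like, assuming a hypothetical json_query column that stores a JSON array of the URLs that should pass:

# JSON_CONTAINS expects a valid JSON candidate, so encode the URL first
url_json = my_url.to_json # => "\"http://localhost:3003/hats\""
Coupon.where('JSON_CONTAINS(json_query, ?)', url_json)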
If, from the user side, the queries should be manageable as a plain text field, you could use a before_save hook on your model to translate the text into the separate table structure or the JSON format, respectively.
But if neither is an option for you and you really need to stick with the query column storing a plain string, then you could use a LIKE query to match the sub-string 'your-url' = :url:
Coupon.where('query LIKE "%? = :url%"', my_url)
which, if you pass e.g. 'http://localhost:3003/hats' as my_url, would produce a SQL query like this:
SELECT `coupons`.* FROM `coupons`
WHERE (query LIKE "%'http://localhost:3003/hats' = :url%")
So I have my location column using the Point data type. I'm using Apollo Server and Prisma, and when I run "npx prisma db pull" it pulls the column in as an unsupported type, because Point is not currently supported by Prisma.
So I thought "OK, I'll use String and manage how to insert this data type myself" and changed the schema accordingly. Surprise: it didn't work. I've tried to find any approach to handling the MySQL Point data type in Prisma, but there's no info whatsoever. I'd really appreciate any ideas.
You cannot convert it to String and use it that way, as the Point type isn't supported yet. You need to leave it as Unsupported, and you can only add data via raw queries.
For now, only adding data is supported; you cannot query such a column through the regular PrismaClient API.
We can query the data with Prisma Client via raw queries, e.g. SELECT id, ST_AsText(geom) AS geom FROM training_data, where geom has data type geometry and is declared as Unsupported("geometry").
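To illustrate both directions, here is a rough TypeScript sketch using Prisma's raw-query APIs; the places table and its location Point column are made-up names for this example:

import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Insert: build the POINT server-side from WKT text
  await prisma.$executeRaw`
    INSERT INTO places (name, location)
    VALUES ('office', ST_GeomFromText('POINT(40.7 -74.0)'))`

  // Read: convert the binary geometry back to text
  const rows = await prisma.$queryRaw`
    SELECT id, name, ST_AsText(location) AS location FROM places`
  console.log(rows)
}

main().finally(() => prisma.$disconnect())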
Y'all,
within my custom Strapi content-type controller code, what method on the model object do I use to create a new record? My app is configured to use MySQL.
The following worked fine when I was using MongoDB, but now with MySQL it doesn't work.
With Mongo, in my code, I was doing this:
let model = strapi.models[modelName];
await model.create({"Name":"<NEW ENTRY>", "Path":ruleData.requestedPath});
But now, with MySQL, I get an error saying that model.create() is not a function. 🤔
Also, when I step into the code, create() is no longer there, and I can't seem to find an equivalent "create" method on the model object for MySQL.
Does the Strapi ORM model object change its member functions when moving from MongoDB to MySQL? I thought not, since that was a big part of the reason for using an ORM.
I suggest you use strapi.query('article') instead of strapi.models.article
So it will be strapi.query('article').create({...})
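Inside a custom controller that might look roughly like this (assuming Strapi v3; 'rule' is a placeholder for your content type's name):

module.exports = {
  async createRule(ctx) {
    // strapi.query() goes through the connector, so it works for both MongoDB and MySQL
    const entry = await strapi.query('rule').create({
      Name: '<NEW ENTRY>',
      Path: ctx.request.body.requestedPath,
    });
    ctx.send(entry);
  },
};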
I have a model with a UUID as primary key.
import uuid
from django.db import models

class Books(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    title = .....
And I have a simple query:
results = Books.objects.all()
All is working fine in terms of saving and retrieving data, but part of the record-editing process requires storing records in session variables, which means I get a "UUID('…') is not JSON serializable" error.
It seems to me that the simplest answer is to convert the UUID objects to strings immediately after making the initial query, thus preventing multiple changes elsewhere. Does that sound logical? If so, I assume I could do it with some sort of list comprehension. Could someone help with the syntax, please? Or suggest a different approach if you prefer!
Many thanks.
# You have to convert the UUIDs to strings before serializing:
import json

results = Books.objects.values_list('id', flat=True)
json_str = json.dumps([str(pk) for pk in results])
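An alternative worth knowing about: Django ships a JSON encoder that already understands UUIDs (and dates), which avoids the manual conversion entirely:

import json
from django.core.serializers.json import DjangoJSONEncoder

# values() returns dicts; DjangoJSONEncoder takes care of the UUID values
payload = list(Books.objects.values('id'))
json_str = json.dumps(payload, cls=DjangoJSONEncoder)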
Hi there, I have some SQL tables and I want to convert them into a "Drupal node format", but I don't know how to do it. Does anyone know at least which tables I have to write to in order to have a full node with all the keys, etc.?
I will give an example :
I have theses Objects :
Anime
  field animeID
  field animeName
Producer
  field producerID
  field producerName
AnimeProducers
  field animeID
  field producerID
I have used the CCK module and created in my Drupal site a new content type Anime and a new data type Producer that exists within an Anime object.
How can I insert all the data from my simple MySQL DB into Drupal?
Sorry for the long post; I wanted to give you the chance to understand my problem.
Thanks in advance for taking the time to read it.
You can use either the Feeds module to import flat CSV files, or there is a module called Migrate that seems promising (albeit pretty intense). Both work on Drupal 6 or 7.
Hmm... I think you can export a CSV from your SQL database and then use
http://drupal.org/project/node_import
to import this CSV data into nodes. I don't know if there is another non-programmatic way.
The main tables for node property data are node and node_revision; have a look at the columns in those and it should be fairly obvious what needs to go where.
As far as fields go, their storage is predictable, so you would be able to automate an import (although I don't envy you having to write that!). If your field is called 'field_anime', its data will live in two tables, field_data_field_anime and field_revision_field_anime, which are keyed by entity ID (in this case the node ID), entity type (in this case 'node' itself) and bundle (in this case the name of your node type). You should keep both tables up to date to ensure the revision system functions correctly.
The simplest way to do it, though, is with PHP and the node API functions:
/* This is for a single node; obviously you'd want to loop through your custom SQL data here */
$node = new stdClass();
$node->type = 'my_type';
$node->title = 'Title';
$node->language = LANGUAGE_NONE;
node_object_prepare($node);

// Fields
$node->field_anime[LANGUAGE_NONE] = array(0 => array('value' => $value_for_field));
$node->field_producer[LANGUAGE_NONE] = array(0 => array('value' => $value_for_field));
// And so on...

// Finally save the node
node_save($node);
If you use this method, Drupal will handle a lot of the messy stuff for you (for example, updating the taxonomy_index table automatically when a taxonomy term field is added to a node).
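Tying it back to the question, the loop could pull rows straight from the legacy table, assuming it lives in (or is reachable from) the Drupal database; the table and field names are the ones from the example above:

$result = db_query('SELECT animeID, animeName FROM {anime}');
foreach ($result as $row) {
  $node = new stdClass();
  $node->type = 'anime';
  $node->title = $row->animeName;
  $node->language = LANGUAGE_NONE;
  node_object_prepare($node);
  node_save($node);
}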