I have an external system that creates documents (orders) on my platform. The reference to my platform is maintained through a string code, which is not a primary key. So, I have the following entities:
{
docType: "submission",
code: "XPTO28",
name: "test-sub"
}
{
docType: "order",
code: "XPTO28",
value: "100$"
}
Is there any query to associate order documents with submission documents without using primary keys, or must I do so programmatically?
You have to do it programmatically. Joins in N1QL are currently limited to linking a field (or something that can be reconstructed from a field, e.g. by concatenating a prefix) to the primary key of the joined keyspace.
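If you can control how documents are keyed, that limitation is workable: a lookup join succeeds when the target document key can be rebuilt from a field. A minimal sketch, assuming a bucket named mybucket and submission documents stored under keys of the form submission::<code> (both assumptions, not given in the question):

SELECT o.code, o.`value`, s.name
FROM mybucket o
JOIN mybucket s ON KEYS "submission::" || o.code
WHERE o.docType = "order";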
I want to create a column with type RECORD. I have a STRUCT or ARRAY<STRUCT>. The JSON looks like this:
"fruit":[{"apples":"5","oranges":"10"},{"apples":"5","oranges":"4"}]
"fruit":{"apples":"1","oranges":"15"}
"fruit":{"apples":"5","oranges":"1"}
I want fruit to be a RECORD type with this schema:
fruit RECORD NULLABLE
fruit.apples STRING NULLABLE
fruit.oranges STRING NULLABLE
Using BigQuery SQL, you can use the following DDL, as described in the documentation: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#create_table_statement
CREATE TABLE mydataset.newtable
(
fruit STRUCT<
apples STRING,
oranges STRING
>
)
You can also use BigQuery's schema auto-detection feature to create a table from a JSON file: https://cloud.google.com/bigquery/docs/schema-detect#loading_data_using_schema_auto-detection
I believe the most straightforward way to achieve what you want is to use an edited version of the JSON file you provided (complying with the rules shown in the public docs) and load your data with auto-detection from the Cloud Console.
If you would like to get the following schema:
fruit RECORD NULLABLE
fruit.apples INTEGER NULLABLE
fruit.oranges INTEGER NULLABLE
You should use the following JSON file:
{"fruit":{"apples":"5","oranges":"10"}}
{"fruit":{"apples":"5","oranges":"4"}}
{"fruit":{"apples":"1","oranges":"15"}}
{"fruit":{"apples":"5","oranges":"1"}}
On the other hand, if you prefer to get a repeated attribute (since there are two fruit objects in the same row of the example you provided), you would need to use the following file:
{"fruit":[{"apples":"5","oranges":"10"},{"apples":"5","oranges":"4"}]}
{"fruit":{"apples":"1","oranges":"15"}}
{"fruit":{"apples":"5","oranges":"1"}}
This will result in the following schema:
fruit RECORD REPEATED
fruit.apples INTEGER NULLABLE
fruit.oranges INTEGER NULLABLE
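Either version of the file can also be loaded with auto-detection from the command line instead of the Cloud Console; a sketch, with dataset, table, and file names as placeholders:

bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON mydataset.newtable ./fruit.json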
Finally, I noticed that you specified that you would like the attributes fruit.apples and fruit.oranges to be STRING, which is not straightforward for auto-detection since the values are numbers such as 5 and 10. In this case you could explicitly create the table with a DDL statement, but I strongly suggest turning these fields into an INTEGER if that still suits your use case.
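For reference, such an explicit DDL statement, keeping the leaves as STRING and making fruit repeated to match the first sample row, could look like this (the table name is reused from the example above):

CREATE TABLE mydataset.newtable
(
  fruit ARRAY<STRUCT<
    apples STRING,
    oranges STRING
  >>
)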
I am trying to find a solution for quick-search functionality within a PostgreSQL JSONB column. The requirement is that we can search for a value under any JSON key.
Table structure:
CREATE TABLE entity (
    id bigint NOT NULL,
    jtype character varying(64) NOT NULL,
    jdata jsonb,
    CONSTRAINT entity_pk PRIMARY KEY (id)
);
The idea is that we store JSON documents of different types in one table: jtype defines the JSON entity type, and jdata holds the JSON data. For example:
jtype='person', jdata='{"personName":"John", "personSurname":"Smith", "company":"ABS Software", "position":"Programmer"}'
jtype='company', jdata='{"name":"ABS Software", "address":"Somewhere in Alaska"}'
The goal is a quick search where the user can type 'ABS' and find both records: the company and the person who works at the company.
The analog in Oracle DB is the CONTAINS function:
SELECT jtype, jdata FROM entity WHERE CONTAINS (jdata, 'ABS') > 0;
A GIN index only allows searching for keys or key/value pairs:
GIN indexes can be used to efficiently search for keys or key/value
pairs occurring within a large number of jsonb documents (datums). Two
GIN "operator classes" are provided, offering different performance
and flexibility trade-offs.
https://www.postgresql.org/docs/current/static/datatype-json.html#JSON-INDEXING
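To illustrate the limitation: a GIN index on jdata speeds up containment queries on exact key/value pairs, but it cannot match 'ABS' as a substring. A sketch against the entity table above (the index name is mine):

CREATE INDEX entity_jdata_gin ON entity USING gin (jdata);

-- matches the exact pair only, not partial strings like 'ABS'
SELECT jtype, jdata
FROM entity
WHERE jdata @> '{"company": "ABS Software"}';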
https://github.com/postgrespro/jsquery might be useful for what you are looking for although I haven't used it before.
As of PostgreSQL 10, you can create indexes on JSON/JSONB columns and then do full-text searching within the values of that column, like so:
libdata=# SELECT bookdata -> 'title'
FROM bookdata
WHERE to_tsvector('english', bookdata) @@ to_tsquery('duke');
------------------------------------------
"The Tattooed Duke"
"She Tempts the Duke"
"The Duke Is Mine"
"What I Did For a Duke"
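To keep that full-text search from scanning the whole table, the same expression can be indexed; a sketch assuming the table and column from the example above (the index name is mine):

CREATE INDEX bookdata_fts_idx
ON bookdata
USING gin (to_tsvector('english', bookdata));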
I'm working with MongoDB, and I use a MongoDB-to-Postgres foreign data wrapper.
Now I can find my tables and data.
I want to compute some statistics, but I can't access objects that have the json type in Postgres.
My problem is that I get the whole object as json, but I need to access its fields separately.
I used this:
CREATE FOREIGN TABLE rents (
    _id NAME,
    status text,
    "from" json
) SERVER mongo_server
OPTIONS (database 'tr', collection 'rents');
The field "from" is an object.
I found something that looked promising and tried it, but nothing happened.
The error (why a screenshot??) means that the data is not in valid JSON format.
As a first step, you could define the column as type text instead of json. Then querying the foreign table will probably work, and you can see what is actually returned and why PostgreSQL thinks it is not valid JSON.
Maybe you can create a view on top of the foreign table that converts the value to valid JSON for further processing.
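A sketch of that two-step approach; the names rents_raw and rents_json are placeholders of mine:

CREATE FOREIGN TABLE rents_raw (
    _id NAME,
    status text,
    "from" text
) SERVER mongo_server
OPTIONS (database 'tr', collection 'rents');

-- once the text is confirmed (or cleaned up) to be valid JSON, expose it as json;
-- the cast will fail for any row whose text is still not valid JSON
CREATE VIEW rents_json AS
SELECT _id, status, "from"::json AS "from"
FROM rents_raw;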
I have a label Person which contains millions of nodes. The nodes have some properties and I am trying to add a new property to the nodes from a CSV file.
I am trying to match them by the person's forename and surname but the query is too slow. The query is:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM
'file:///personaldata.csv' AS line1
MATCH (p:Person {forename:line1.forename, surname:line1.surname})
SET p.newPersonNumber=line1.newPersonNumber
I left the query running for maybe an hour before I terminated it.
Am I doing something wrong?
Note that I have indexes on forename and surname.
Try profiling the query to see if it really uses the indices:
PROFILE
WITH "qwe" AS forename, "asd" AS surname
MATCH (p:Person {forename: forename, surname: surname})
RETURN p
If it doesn't, you can force it:
WITH "qwe" AS forename, "asd" AS surname
MATCH (p:Person {forename: forename, surname: surname})
USING INDEX p:Person(forename)
USING INDEX p:Person(surname)
RETURN p
As mentioned in the Cypher refcard:
Index usage can be enforced, when Cypher uses a suboptimal index or more than one index should be used.
See also the chapter on USING.
Update
Since using multiple indices on the same node is not currently supported, let's focus back on why the query is slow, and whether it actually does something. You can profile the actual LOAD CSV for a subset, and see if the data matches anything:
PROFILE
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///personaldata.csv' AS line1
WITH line1
LIMIT 10
OPTIONAL MATCH (p:Person {forename:line1.forename, surname:line1.surname})
RETURN p, line1.newPersonNumber
That way, you can check that the MATCH finds something (i.e. the forename and surname don't need trimming or the like), and you can also check which index is more beneficial to the query: since only one index will be used, the results will be filtered on the other property, and the query will be faster if you use the most discriminating index.

If all the persons are Johns, you'd better use the index on surname, but if they're all Does, use the index on forename. If they're all John Does, you have a duplication problem... Anyway, comparing the numbers on the filtering steps between the two profiles (with either index) should give you an idea of the distribution of the indices.
Introduction first, question at the end. Please read carefully!
I have a master-detail relation between two tables:
CREATE TABLE [dbo].[LookupAttributes] (
    [Id] int IDENTITY (1, 1) NOT NULL,
    [Name] nvarchar (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL) ;
ALTER TABLE [dbo].[LookupAttributes] ADD CONSTRAINT [PK_LookupAttributes] PRIMARY KEY ([Id]) ;
CREATE TABLE [dbo].[Lookup] (
[Id] int IDENTITY (1, 1) NOT NULL,
[LookupAttributesLink] int NOT NULL,
[Code] nvarchar (20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[Value] nvarchar (80) COLLATE SQL_Latin1_General_CP1_CI_AS NULL) ;
ALTER TABLE [dbo].[Lookup] ADD CONSTRAINT [IX_Lookup] UNIQUE ([LookupAttributesLink], [Code]) ;
(There are more fields and indices in both tables but these are the ones that matter...)
The project I'm working on is meant to maintain data in 50+ tables; every week this data is exported to XML to be used by a desktop application as source data. While I would have liked to build a pretty-looking application, it just needed to be done fast, so I use a Dynamic Data Site to maintain the data. It works just fine, except for this table...
As it turns out, there are 600 different lookup records that share the same code, but different attributes. The DDS displays attribute and code correctly in the list of lookup records so there never was any confusion about which lookup record someone was editing. And this has been in use for over 2 years now.
Now the problem: A new table "Lookup-Override" has been added which links to the [Id] field of the Lookup table. Each record in this new table thus displays the [Code] field, but since [Code] isn't unique, it's unclear which Override record belongs to which Lookup record.
To solve this, I need to display more information from the Lookup record. Since the only unique set of fields is the attribute plus the code, I need to display both. But displaying [LookupAttributesLink]+[Code] isn't an option either, since [LookupAttributesLink] is just a number. I need the DDS to display [LookupAttributes].[Name]+[Lookup].[Code] in a single column. Question is: how?

I've considered adding a calculated field to the Lookup table, but I cannot get the attribute name that way. I could create a special page to maintain this table, but I don't like that solution either, since it "breaks" the DDS principle in my opinion; I'm trying to avoid such pages.

So, are there any other possibilities to get the site to display both attribute name and lookup code in the override table? The most interesting solution would be a calculated field that could retrieve the attribute name. How would I do that?
Solved it myself! See answer below, which works just fine.
Found it! I had to do a few things:
CREATE FUNCTION LookupName (
    @Attr int,
    @Code nvarchar(255)
) RETURNS nvarchar(1000)
AS
BEGIN
    DECLARE @Name nvarchar(1000);
    SELECT @Name = Name
    FROM [dbo].[LookupAttributes]
    WHERE [Id] = @Attr;
    RETURN @Name + '/' + @Code;
END
GO
ALTER TABLE [dbo].[Lookup] ADD [Name] AS [dbo].[LookupName]([LookupAttributesLink], [Code])
GO
This will add an additional calculated field to the table which uses a function to calculate the proper name. I then had to add some metadata for the lookup table:
[MetadataType(typeof(LookupMetadata))]
public partial class Lookup { }
[DisplayColumn("Name", "Name")]
[DisplayName("Lookup")]
public class LookupMetadata
{
[ScaffoldColumn(false)]
public int Id;
[ScaffoldColumn(false)]
public object Name;
}
This will hide the Name column on the Lookup table itself, but it makes it visible for the Override table. (And it will be used to display the proper value.)
That done, the problem is solved! Quite easy, actually. :-)
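For completeness, the computed column can be sanity-checked with a quick query (the output depends on your data):

SELECT TOP (5) [LookupAttributesLink], [Code], [Name]
FROM [dbo].[Lookup];
-- [Name] renders as '<attribute name>/<code>'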