I have a JSON list which captures one-to-many relationships.
For example, a School can have multiple Class objects and a Class can have multiple Student objects, but a Student belongs to only one Class and one School:
{
  "School": [ {
    "id": 1,
    "name": "Grad School",
    "Class": [ {
      "name": 101,
      "Student": [ {
        "name": 501,
        "propertyA": "test"
      }]
    }]
  }]
}
I am trying to convert this JSON example into an appropriate GraphQL schema, but the nesting is causing issues. Apollo appears to be able to help, but the example below isn't very descriptive:
https://launchpad.graphql.com/4nqqqmr19
I'm looking for suggestions on how to handle this situation, whether through a JSON-to-schema converter that handles nested structures or otherwise.
I think your issue is not really the schema, which to me looks straightforward.
You have these types (everything below is dummy code, as you have not specified in what language/framework you want to provide the GraphQL API):
SchoolType
  id ID
  name String
  classes [Class]
  students [Student]

ClassType
  id ID
  name String
  school School
  students [Student]

StudentType
  id ID
  name String
  class Class
  school School
Then we need an entry point:
queryType
  name "school"
  argument :id, ID
  resolve do
    schools.where(id: argument["id"])
  end
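For concreteness, here is a rough sketch of what those types and the entry point could look like in graphene (Python). This is purely illustrative, since the answer deliberately stays framework-agnostic; SCHOOLS and CLASSES are hypothetical flat collections built from your JSON (a sketch of building them follows further down), and the back-reference fields (school, class) are omitted for brevity.

import graphene

SCHOOLS = []  # hypothetical flat collections built from the JSON; see the build sketch below
CLASSES = []

class StudentType(graphene.ObjectType):
    name = graphene.String()

class ClassType(graphene.ObjectType):
    name = graphene.String()
    students = graphene.List(StudentType)

class SchoolType(graphene.ObjectType):
    id = graphene.ID()
    name = graphene.String()
    classes = graphene.List(ClassType)

    def resolve_classes(parent, info):
        # `parent` is whatever resolve_school returned; ClassType.students
        # would follow the same pattern against a STUDENTS collection.
        return [c for c in CLASSES if c["school_id"] == parent["id"]]

class Query(graphene.ObjectType):
    school = graphene.Field(SchoolType, id=graphene.ID(required=True))

    def resolve_school(root, info, id):
        # Look the school up in the flat collection built from the JSON.
        return next((s for s in SCHOOLS if str(s["id"]) == str(id)), None)

schema = graphene.Schema(query=Query)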
So we have the schema. The bigger piece of work is probably getting the different types to access the JSON data in a way that makes the types above work.
So let's say we read the JSON data somehow, with the structure you have:
DATA = JSON.parse(File.read("your-example.json"))
We need to convert this into different collections of objects, so we can query them dynamically:
schools = []
classes = []
students = []
def build_schools(data)
  data.schools.each do |school|
    schools.push(
      name: school.name,
      id: school.id,
      classes: build_classes(school)
    )
  end
end
def build_classes(school)
  ids = []
  school.classes.each do |class|
    ids.push(class.id)
    classes.push(
      id: class.id,
      name: class.name,
      school_id: school.id, # you create your own references, to associate these objects
      students: build_students(class)
    )
  end
  return ids
end
...
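If it helps, here is a rough Python equivalent of that flattening step. It is only a sketch against the JSON shape in the question; the school_id/class_name reference keys are assumptions made up for the illustration.

import json

# Flat collections with hand-made references, mirroring the pseudocode above.
SCHOOLS, CLASSES, STUDENTS = [], [], []

def build(data):
    for school in data["School"]:
        SCHOOLS.append({"id": school["id"], "name": school["name"]})
        for clazz in school.get("Class", []):
            CLASSES.append({
                "name": clazz["name"],
                "school_id": school["id"],       # our own reference back to the school
            })
            for student in clazz.get("Student", []):
                STUDENTS.append({
                    "name": student["name"],
                    "class_name": clazz["name"],  # reference back to the class
                    "school_id": school["id"],
                })

with open("your-example.json") as f:
    build(json.load(f))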
But then you still need to hook this up with your type system, which means writing your resolvers.
For example, on the StudentType:
StudentType
  id ID
  name String
  class Class
  school School
  resolve(object) ->
    class_id = students.where(id: object.id).class_id
    school_id = classes.where(id: class_id).school_id
    schools.where(id: school_id)
The long title also contains a mini-example because I couldn't explain well what I'm trying to do. Nonetheless, the similar-questions window led me to various implementations. But since I have read multiple times that it's a bad design, I would like to ask whether what I'm trying to do is a bad design rather than asking how to do it. For this reason I will try to explain my use case with minimal functional code.
Suppose I have two classes, each with its own parameters:
class MyClass1:
    def __init__(self, param1=1, param2=2):
        self.param1 = param1
        self.param2 = param2

class MyClass2:
    def __init__(self, param3=3, param4=4):
        self.param3 = param3
        self.param4 = param4
I want to print param1...param4 as strings (i.e. "param1"..."param4") and not their values (i.e. 1...4).
Why? Two reasons in my case:
I have a GUI where the user is asked to select one of the class types (MyClass1, MyClass2) and is then asked to insert the values for the parameters of that class. The GUI then must show the parameter names ("param1", "param2" if MyClass1 was chosen) as a label next to the Entry widget that collects the value. Now, suppose the number of classes and parameters is very high, like 10 classes and 20 parameters per class. In order to minimize the written code and to make it flexible (add or remove parameters from classes without modifying the GUI code), I would like to cycle over all the parameters of MyClassX and create the relative widget for each of them, so I need the paramX names in the form of strings. The real application I'm working on is even more complex (parameters are inside other objects of classes), but I used the simplest example. One solution would be to define every parameter as an object where param1.name="param1" and param1.value=1. Then in the GUI I would print param1.name. But this leads to a specific problem of my implementation, which is reason 2:
MyClass1..MyClassN will at some point be written out as JSON. The JSON will be a huge file, and since it's a complex tree (the example is simple) I want to keep it as simple as possible. To explain why I don't like the solution above, suppose this situation:
class MyClass1:
    def __init__(self, param1, param2, combinations=[]):
        self.param1 = param1
        self.param2 = param2
        self.combinations = combinations
Suppose param1 and param2 are now lists of variable size, and combinations is a list where each element is composed of one combination of param1 and param2 plus an output generated from some sort of calculation. Each element of the list combinations is an object SingleCombination, for example (metacode):
param1 = [1, 2]
param2 = [5, 6]
SingleCombination.param1 = 1
SingleCombination.param2 = 5
SingleCombination.output = 1*5
MyInst1.combinations.append(SingleCombination)
In my case I will further encapsulate param1 and param2 in an object called parameters, so every condition will have a nice tree with only two objects, parameters and output, and expanding the parameters node will show all the parameters with their values.
If I use jsonpickle to generate JSON from the situation above, it is nicely displayed, since the name of each node is the name of the variable ("param1", "param2" as strings in the JSON). But if I do the trick at the end of reason (1), turning each paramN into an object with paramN.name and paramN.value, the JSON tree becomes ugly and, above all, huge, because if I have a big number of conditions every paramN contains 2 sub-elements. I wrote out the situation and displayed it with a JSON viewer, see the attached image.
I could pre-process the data structure before creating the JSON; the problem is that I use the JSON to recreate the data structure in another session of the program, so I need all the pieces of the data structure to be in the JSON.
So, from my requirements, it seems that the workaround to avoid printing the variable names creates some side effects on the JSON visualization that I don't know how to solve without changing the logic of my program...
If you use dataclasses, getting the field names is pretty straightforward:
from dataclasses import dataclass, fields

@dataclass
class MyClass1:
    first: int = 4

>>> fields(MyClass1)
(Field(name='first',type=<class 'int'>,default=4,...),)
This way, you can iterate over the class fields and ask your user to fill them. Note that each field has a type, which you could use to e.g. ask the user for several values, as in your example.
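As a hedged sketch of that idea, using plain console prompts to stand in for the GUI widgets; the naive field.type(raw) cast is an assumption and presumes the annotations are real types rather than strings:

from dataclasses import fields

def ask_user(cls):
    # One "widget" (here: an input prompt) per dataclass field, labelled with the field name.
    values = {}
    for field in fields(cls):
        raw = input(f"{field.name} ({field.type.__name__}): ")
        values[field.name] = field.type(raw)  # naive cast; real GUI code would validate
    return cls(**values)

# obj = ask_user(MyClass1)  # for the MyClass1 above: prompts for `first`, returns MyClass1(first=...)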
You could add functions to programmatically extract the param names from the class (_show_inputs below) and the values from instances (_json below):
def blossom(cls):
    """decorate a class with `_json` (bound method) and `_show_inputs` (classmethod)"""
    def _json(self):
        return json.dumps(self, cls=DataClassPropEncoder)
    def _show_inputs(cls):
        return {
            field.name: field.type.__name__
            for field in fields(cls)
        }
    cls._json = _json
    cls._show_inputs = classmethod(_show_inputs)
    return cls
NOTE 1: There's actually no need to decorate the classes with blossom. You could just use its internal functions programmatically.
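For instance, the equivalent of _show_inputs without the decorator is just a dict comprehension over fields():

from dataclasses import fields

# Same dict that _show_inputs would return, built directly from the class.
{field.name: field.type.__name__ for field in fields(MyClass1)}
# -> {'first': 'int'} for the MyClass1 defined above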
Using a custom json encoder to dump the dataclass objects, including properties:
import json
class DataClassPropEncoder(json.JSONEncoder):  # https://stackoverflow.com/a/51286749/7814595
    def default(self, o):
        if is_dataclass(o):
            cls = type(o)
            # inject instance properties
            props = {
                name: getattr(o, name)
                for name, value in cls.__dict__.items()
                if isinstance(value, property)
            }
            return {
                **props,
                **asdict(o)
            }
        return super().default(o)
Finally, wrap the computations inside properties so they are
serialized as well when using the decorated class. Full code example:
from dataclasses import asdict
from dataclasses import dataclass
from dataclasses import fields
from dataclasses import is_dataclass
import json
from itertools import product
from typing import List

class DataClassPropEncoder(json.JSONEncoder):  # https://stackoverflow.com/a/51286749/7814595
    def default(self, o):
        if is_dataclass(o):
            cls = type(o)
            props = {
                name: getattr(o, name)
                for name, value in cls.__dict__.items()
                if isinstance(value, property)
            }
            return {
                **props,
                **asdict(o)
            }
        return super().default(o)

def blossom(cls):
    def _json(self):
        return json.dumps(self, cls=DataClassPropEncoder)
    def _show_inputs(cls):
        return {
            field.name: field.type.__name__
            for field in fields(cls)
        }
    cls._json = _json
    cls._show_inputs = classmethod(_show_inputs)
    return cls

@blossom
@dataclass
class MyClass1:
    param1: int
    param2: int

@blossom
@dataclass
class MyClass2:
    param3: List[str]
    param4: List[int]

    def _compute_single(self, values):  # TODO: implement this
        return values[0] * values[1]

    @property
    def combinations(self):
        # TODO: cache if used more than once
        # TODO: combinations might explode
        field_names = []
        field_values = []
        cls = type(self)
        for field in fields(cls):
            field_names.append(field.name)
            field_values.append(getattr(self, field.name))
        results = []
        for values in product(*field_values):
            result = {
                **{
                    field_names[idx]: value
                    for idx, value in enumerate(values)
                },
                "output": self._compute_single(values)
            }
            results.append(result)
        return results
>>> print(f"MyClass1:\n{MyClass1._show_inputs()}")
MyClass1:
{'param1': 'int', 'param2': 'int'}
>>> print(f"MyClass2:\n{MyClass2._show_inputs()}")
MyClass2:
{'param3': 'List', 'param4': 'List'}
>>> obj_1 = MyClass1(3,4)
>>> print(f"obj_1:\n{obj_1._json()}")
obj_1:
{"param1": 3, "param2": 4}
>>> obj_2 = MyClass2(["first", "second"], [4, 2])
>>> print(f"obj_2:\n{obj_2._json()}")
obj_2:
{"combinations": [{"param3": "first", "param4": 4, "output": "firstfirstfirstfirst"}, {"param3": "first", "param4": 2, "output": "firstfirst"}, {"param3": "second", "param4": 4, "output": "secondsecondsecondsecond"}, {"param3": "second", "param4": 2, "output": "secondsecond"}], "param3": ["first", "second"], "param4": [4, 2]}
NOTE 2: If you need to perform several computations per class, it might be a good idea to abstract away the pattern in the combinations property to avoid repeating code.
NOTE 3: If you need access to the properties several times and not just once, you might want to consider caching their values to avoid re-computation.
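A minimal sketch of that caching idea, assuming Python 3.8+ where functools.cached_property is available. Note that DataClassPropEncoder above checks isinstance(value, property), so it would need an extra check for cached_property descriptors if you combine the two; MyClass2Cached is just a hypothetical variant for illustration.

from dataclasses import dataclass
from functools import cached_property
from itertools import product
from typing import List

@dataclass
class MyClass2Cached:
    param3: List[str]
    param4: List[int]

    @cached_property
    def combinations(self):
        # Computed once per instance, then reused on later accesses.
        return [
            {"param3": p3, "param4": p4, "output": p3 * p4}
            for p3, p4 in product(self.param3, self.param4)
        ]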
Once you have an instance of MyClass1 / MyClass2, you can call vars() or vars().keys() and it will give you the attribute names as strings. Unlike dir(), it will not show all the built-in attributes/methods starting with __.
class MyClass2:
    def __init__(self, param3=3, param4=4):
        self.param3 = param3
        self.param4 = param4
instance_of_myclass2 = MyClass2(param3="what", param4="ever")
print(vars(instance_of_myclass2))
{'param3': 'what', 'param4': 'ever'}
print(vars(instance_of_myclass2).keys())
dict_keys(['param3', 'param4'])
dir(instance_of_myclass2)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'param3', 'param4']
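If you need the parameter names before any instance exists (e.g. to build the GUI form first), one hedged alternative is to read them from the __init__ signature; param_names below is just an illustrative helper, not part of the original code:

import inspect

def param_names(cls):
    # All __init__ parameters except 'self'; works on the plain classes above.
    return [name for name in inspect.signature(cls.__init__).parameters if name != "self"]

print(param_names(MyClass2))  # ['param3', 'param4']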
I have two MySQL tables: Owners & Pets.
Owner case class:
Owner(id: Int, name: String, age: Int)
Pet case class:
Pet(id: Int, ownerId: Int, type: String, name: String)
I want to create out of those tables list of OwnerAndPets:
case class OwnerAndPets(ownerId: Int,
name: String,
age: String,
pets: List[Pet])
(It's for migration purposes: I want to move those tables into a MongoDB collection whose documents would be OwnerAndPets objects.)
I have two issues:
when I use a join with Quill on Owner & Pet I get a list of tuples [(Owner, Pet)],
and if I have a few pets for an owner I will get:
[(Owner(1, "john", 30), Pet(3,1,"dog","max")),
(Owner(1, "john", 30), Pet(4,1,"cat","snow"))]
I need it as (Owner(1, "john", 30), [Pet(3,1,"dog","max"), Pet(4,1,"cat","snow")])
how can I make it like this?
when I use a join with Quill on Owner & Pet I will not get owners that don't have pets, and that's fine because that is how a join is supposed to behave, but in my script I would want to create an object like:
OwnerAndPets(Owner(2, "mark", 30), List())
Would appreciate any help, thanks
this is my join query:
query[Owner].join(query[Pet]).on((o, p) => o.id == p.o_id)
Your question highlights one of the major differences between FRM (Functional Relational Mapping) systems like Quill and Slick and ORMs like Hibernate. The purpose of an FRM is not to build a particular domain-specific object hierarchy (e.g. OwnerAndPets) but rather to translate a single database query into some set of objects that can reasonably be pulled out of that single query's result set - typically a tuple. This means it is up to you to fold the tuples (Owner_N, Pet_1..N) into a single OwnerAndPets object in memory. Typically this can be done via the groupBy and map operators:
run(query[Owner].join(query[Pet]).on((o, p) => o.id == p.o_id))
.groupBy(_._1)
.map({case (owner,ownerPetList) =>
OwnerAndPets(
owner.id,owner.name,owner.age+"", // Not sure why you made 'age' a String in OwnerAndPets
ownerPetList.map(_._2))
})
That said, some database vendors (e.g. Postgres) implement array types internally, so in some cases you can do the aggregation at the database level, but this is not the case for MySQL and many others.
I have a SQLAlchemy model defined:
from datetime import datetime

from sqlalchemy.dialects.postgresql import JSONB

# `db` is assumed to be the Flask-SQLAlchemy instance, e.g. db = SQLAlchemy(app)

class User(db.Model):
    __tablename__ = "user"

    id = db.Column(db.Integer, primary_key=True)
    nickname = db.Column(db.String(255), nullable=False)
    city = db.Column(db.String(255))
    contact_list = db.Column(JSONB)
    created_at = db.Column(db.DateTime, default=datetime.utcnow)

def add_user():
    user = User(nickname="Mike")
    user.contact_list = [{"name": "Sam", "phone": ["123456", "654321"]},
                         {"name": "John", "phone": ["159753"]},
                         {"name": "Joe", "phone": ["147889", "98741"]}]
    db.session.add(user)
    db.session.commit()

if __name__ == "__main__":
    add_user()
How can I retrieve the name from my contact_list using a phone number? For example, given 147889, how can I retrieve Joe?
I have tried this
User.query.filter(User.contact_list.contains({"phone": ["147889"]})).all()
But it returns an empty list: []
How can I do this?
You just forgot that your JSON path should include the outermost array as well:
User.query.filter(User.contact_list.contains([{"phone": ["147889"]}])).all()
will return the user you are looking for. The original query would only match if your JSON contained a top-level object with a "phone" key. Note that this returns the User object in question, not the specific object/name from the JSON structure. If you want that, as seems to be the end goal, you can expand the array elements of each user, filter based on the resulting records, and select the name:
val = db.column('value', type_=JSONB)
db.session.query(val['name'].astext).\
select_from(User,
db.func.jsonb_array_elements(User.contact_list).alias()).\
filter(val.contains({"phone": ["147889"]})).\
all()
On the other hand, the above query is not as index-friendly as the first one can be, because it has to expand all the arrays before filtering, so it might be beneficial to first find the users that contain the phone number in their contact list in a subquery or CTE, and then expand and filter.
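A sketch of that last idea, assuming the Flask-SQLAlchemy setup from the example (the CTE name is arbitrary and the pattern mirrors the query above):

val = db.column('value', type_=JSONB)

# First narrow down to users whose contact_list contains the phone number
# (index friendly), then expand and filter only those users' arrays.
candidates = User.query.\
    filter(User.contact_list.contains([{"phone": ["147889"]}])).\
    cte('candidates')

names = db.session.query(val['name'].astext).\
    select_from(candidates,
                db.func.jsonb_array_elements(candidates.c.contact_list).alias()).\
    filter(val.contains({"phone": ["147889"]})).\
    all()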
I am trying to make a localized version of this app: SMS Broadcast Ruby App
I have been able to get the JSON data from a local file and sanitize the number, as well as open the JSON data. However, I have been unable to extract the values and pair them into a scrubbed hash. Here's what I have so far.
def data_from_spreadsheet
file = open(spreadsheet_url).read
JSON.parse(file)
end
def contacts_from_spreadsheet
contacts = {}
data_from_spreadsheet.each do |entry|
puts entry['name']['number']
contacts[sanitize(number)] = name
end
contacts
end
Here's the JSON data sample I'm working with.
[
{
"name": "Michael",
"number": 9045555555
},
{
"name": "Natalie",
"number": 7865555555
}
]
Here's how I would like the JSON to be expressed after the contacts_from_spreadsheet method.
{
'19045555555' => 'Michael',
'17865555555' => 'Natalie'
}
Any help would be much appreciated.
You could create an array of pairs (hashes) using map and then call reduce to get a single hash.
data = [{
"name": "Michael",
"number": 9045555555
},
{
"name": "Natalie",
"number": 7865555555
}]
data.map{|e| {e[:number] => e[:name]}}.reduce Hash.new, :merge
Result: {9045555555=>"Michael", 7865555555=>"Natalie"}
You don't seem to have number or name extracted in any way, so I think you'll first need to update your code to get those details.
That is, if entry is a parsed JSON object, you can do the following:
def contacts_from_spreadsheet
contacts = {}
data_from_spreadsheet.each do |entry|
contacts[sanitize(entry['number'])] = entry['name']
end
contacts
end
Not strictly keeping this within JSON, but I have solved the problem. Here's what I used.
def data_from_spreadsheet
file = open(spreadsheet_url).read
YAML.load(file)
end
def contacts_from_spreadsheet
contacts = {}
data_from_spreadsheet.each do |entry|
name = entry['name']
number = entry['phone_number'].to_s
contacts[sanitize(number)] = name
end
contacts
end
This returned a clean hash:
{"+19045555555"=>"Michael", "+17865555555"=>"Natalie"}
Thanks everyone who added input!
I have a function which returns JSON data as history from the Version model of reversion.models.
from django.http import HttpResponse
from reversion.models import Version
from django.contrib.admin.models import LogEntry
import json

def history_list(request):
    history_list = Version.objects.all().order_by('-revision__date_created')
    data = []
    for i in history_list:
        data.append({
            'date_time': str(i.revision.date_created),
            'user': str(i.revision.user),
            'object': i.object_repr,
            'field': i.revision.comment.split(' ')[-1],
            'new_value_field': str(i.field_dict),
            'type': i.content_type.name,
            'comment': i.revision.comment
        })
    data_ser = json.dumps(data)
    return HttpResponse(data_ser, content_type="application/json")
When I run the above snippet I get output JSON like:
[{"type": "fruits", "field": "colour", "object": "anyobject", "user": "anyuser", "new_value_field": "{'price': $23, 'weight': 2kgs, 'colour': 'red'}", "comment": "Changed colour."}]
In the function above,
'comment': i.revision.comment
returns JSON such as "comment": "Changed colour", where colour is the field; in the function I retrieve it from the comment with
'field': i.revision.comment.split(' ')[-1]
But I assume getting the field name and value from field_dict is a better approach.
Problem: from the above JSON list I would like to filter out new_value_field and old_value; in new_value_field, only the value of colour.
Getting the changed fields isn't as easy as checking the comment, as this can be overridden.
Django-reversion just takes care of storing each version, not comparing.
Your best option is to look at the django-reversion-compare module and its admin.py code.
The majority of the code in there is designed to produce a neat side-by-side HTML diff page, but the code should be able to be re-purposed to generate a list of changed fields per object (as there can be more than one changed field per version).
The code should* include a view-independent way to get the changed fields at some point, but this should get you started:
from django.db import models
from reversion_compare.admin import CompareObjects
from reversion.revisions import default_revision_manager

def changed_fields(obj, version1, version2):
    """
    Return the list of fields on `obj` that changed between version1 and version2.
    """
    # Create a list of all normal fields and append many-to-many fields
    fields = [field for field in obj._meta.fields]
    concrete_model = obj._meta.concrete_model
    fields += concrete_model._meta.many_to_many

    # This gathers the related reverse ForeignKey fields, so we can do ManyToOne compares
    reverse_fields = []
    # From: http://stackoverflow.com/questions/19512187/django-list-all-reverse-relations-of-a-model
    changed_fields = []
    for field_name in obj._meta.get_all_field_names():
        f = getattr(
            obj._meta.get_field_by_name(field_name)[0],
            'field',
            None
        )
        if isinstance(f, models.ForeignKey) and f not in fields:
            reverse_fields.append(f.rel)
    fields += reverse_fields

    for field in fields:
        try:
            field_name = field.name
        except:
            # is a reverse FK field
            field_name = field.field_name
        is_reversed = field in reverse_fields
        obj_compare = CompareObjects(field, field_name, obj, version1, version2, default_revision_manager, is_reversed)
        if obj_compare.changed():
            changed_fields.append(field)
    return changed_fields
This can then be called like so:
changed_fields(MyModel,history_list_item1, history_list_item2)
Where history_list_item1 and history_list_item2 correspond to various actual Version items.
*: Said as a contributor, I'll get right on it.
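If pulling in django-reversion-compare is more than you need, a rougher alternative (only a sketch, assuming a django-reversion release where Version.field_dict holds the serialized field values) is to diff the field_dict of two consecutive versions of the same object directly; diff_versions below is a made-up helper:

def diff_versions(old_version, new_version):
    # Compare two reversion Versions of the same object and report, per changed
    # field, the old and the new value.
    old, new = old_version.field_dict, new_version.field_dict
    return {
        name: {"old_value": old.get(name), "new_value": value}
        for name, value in new.items()
        if old.get(name) != value
    }

For the colour example in the question, that would give you the old and new colour under the 'colour' key.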