how to convert group by(values) to json - django - json

I'm trying to convert my grouped-by data (values() in Django) into a JsonResponse, but it raises this error:
AttributeError: 'dict' object has no attribute 'f_type'
This is my function for loading the JSON data:
def load_cate(request):
    lists = Room.objects.values('f_type', 'room_type', 'beds', 'balcon').annotate(total=Count('pk')).order_by('-total')
    data = []
    for i in lists.values():
        item = {
            'wc_type': i.f_type,
            'room_type': i.room_type,
            'beds': i.beds,
            'balcon': i.balcon,
            'total': i.total
        }
        data.append(item)
    return JsonResponse({'success': True, 'data': data})
Is there something I did wrong? Or is it different for grouped values?
Thanks in advance.

There is no need to loop through the objects. You just need to convert the QuerySet into a list.
values() returns a QuerySet, which can't be passed directly to a JsonResponse. So just convert it into a list:
lists = Room.objects.values('f_type','room_type', 'beds', 'balcon').annotate(total=Count('pk')).order_by('-total')
lists = list(lists)
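For completeness, the original loop failed because values() yields plain dicts, which support key access rather than attribute access. A minimal sketch with a hypothetical row:

```python
# A row as returned by Room.objects.values(...).annotate(...) is a plain dict
row = {'f_type': 'suite', 'room_type': 'double', 'beds': 2, 'balcon': True, 'total': 5}

print(row['f_type'])   # key access works on a dict
# print(row.f_type)    # raises AttributeError: 'dict' object has no attribute 'f_type'
```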


How to convert multi dimensional array in JSON as separate columns in pandas

I have a DB collection consisting of nested strings. I am trying to convert the contents under the "status" column into separate columns against each order ID, in order to track the time taken from "order confirmed" to "pick up confirmed". The string looks as follows:
I have tried the same using:
xyz_db = db.logisticsOrders             # DB collection
df = pd.DataFrame(list(xyz_db.find()))  # JSON to dataframe
Using normalize:
parse1 = pd.json_normalize(df['status'])
It works fine in the case of non-nested arrays. But since status is a nested array, the output is as follows:
Using for:
data = df[['orderid','status']]
data = list(data['status'])
dfy = pd.DataFrame(columns=['statuscode', 'statusname', 'laststatusupdatedon'])
for i in range(0, len(data)):
    result = data[i]
    dfy.loc[i] = [data[i][0], data[i][0], data[i][0], data[i][0]]
It gives the result in the form of appended rows, which is not the format I am trying to achieve.
The output I am trying to get is:
Please help out!
I'll share the approach I used to read JSON; maybe it helps. You can use it with two or more lists.
def jsonify(z):
    genr = []
    # z == z filters out NaN (NaN != NaN); None is skipped explicitly
    if z == z and z is not None:
        z = eval(z)  # parse the string into a Python object
        if type(z) in (dict, list, tuple):
            for dic in z:
                for key, val in dic.items():
                    if key == "name":
                        genr.append(val)
        else:
            return None
    else:
        return None
    return genr

top_genr['genres_N'] = top_genr['genres'].apply(jsonify)
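For the nested "status" case above, json_normalize's record_path argument may be a cleaner route than a hand-rolled loop. A sketch under assumed data (the record layout below is hypothetical, guessed from the column names in the question):

```python
import pandas as pd

# Hypothetical records mimicking the nested "status" structure from the question
records = [
    {"orderid": 1, "status": [
        {"statuscode": "OC", "statusname": "order confirmed",
         "laststatusupdatedon": "2021-01-01"},
        {"statuscode": "PC", "statusname": "pick up confirmed",
         "laststatusupdatedon": "2021-01-02"},
    ]},
]

# record_path expands each element of the nested list into its own row;
# meta carries the parent orderid along with each expanded row
flat = pd.json_normalize(records, record_path="status", meta="orderid")

# Pivot so each status becomes its own column per order ID
wide = flat.pivot(index="orderid", columns="statusname", values="laststatusupdatedon")
print(wide)
```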

Converting mongoengine objects to JSON

I tried to fetch data from MongoDB using mongoengine with Flask. The query works perfectly; the problem is that when I convert the query result into JSON it shows only the field names.
Here is my code:
view.py
from model import Users
result = Users.objects()
print(dumps(result))
model.py
class Users(DynamicDocument):
    meta = {'collection': 'users'}
    user_name = StringField()
    phone = StringField()
Output:
[["id", "user_name", "phone"], ["id", "user_name", "phone"]]
Why does it show only the field names?
Your query returns a queryset. Use the .to_json() method to convert it.
Depending on what you need from there, you may want to use something like json.loads() to get a python dictionary.
For example:
from model import Users
# This returns <class 'mongoengine.queryset.queryset.QuerySet'>
q_set = Users.objects()
json_data = q_set.to_json()
# You might also find it useful to create python dictionaries
import json
dicts = json.loads(json_data)

Create json sub objects using python collections module

I am attempting to build a JSON document to be passed to an API as a POST body.
I am pulling data from MSSQL as an array list, then creating an ordered dictionary using the collections module. I need help on how to create the sub-object.
import json, collections

rowarray_list = []
for row in rows:
    t = (row.NameLine1, row.NameLine2, row.Phone, row.Mobile,
         row.Fax, row.Slogan, row.Address, row.City, row.State, row.Zip,
         row.Email, row.WebSite, row.ApplyOnline,
         row.Preflight, row.Facebook, row.LinkedIn, row.Username)
    rowarray_list.append(t)

objects_list = []
for row in rows:
    d = collections.OrderedDict()
    d['Name'] = row.NameLine1
    d['Phone'] = row.Phone
    d['Mobile'] = row.Mobile
    d['Fax'] = row.Fax
    d['Slogan'] = row.Slogan
    d['Address'] = row.Address
    d['City'] = row.City
    d['State'] = row.State
    d['Zip'] = row.Zip
    d['Email'] = row.Email
    d['Website'] = row.WebSite
    d['ApplyOnline'] = row.ApplyOnline
    d['Preflight'] = row.Preflight
    d['Facebook'] = row.Facebook
    d['LinkedIn'] = row.LinkedIn
    d['Username'] = row.Username
    objects_list.append(d)

json.dumps(objects_list)
I want the JSON objects to be built like this:
{"type": "task1",
 "body": {"Name": row.NameLine,
          "Phone": row.Phone,
          ... }}
I can't seem to figure out how to do this.
I was able to resolve this by using a different approach: creating a dictionary object for each row and appending it to a list of dictionaries, then using the json library to create the full JSON object.
rowarray_list = []
for row in rows:
    subdt = dict(Name=row.NameLine1, Title=row.NameLine2, Phone=row.Phone, Mobile=row.Mobile,
                 Fax=row.Fax, Slogan=row.Slogan, Address=row.Address, City=row.City,
                 State=row.State, Zip=row.Zip, Email=row.Email, WebSite=row.WebSite,
                 ApplyOnline=row.ApplyOnline, Preflight=row.Preflight,
                 Facebook=row.Facebook, LinkedIn=row.LinkedIn, Username=row.Username)
    dt = dict(action='fetchView', body=subdt)
    rowarray_list.append(dt)
print(json.dumps(rowarray_list))
cursor.close()

parse json nested dictionaries within array

How to convert
json_decode = [{"538":["1,2,3","hello world"]},{"361":["0,9,8","x,x,y"]}]
to
{"538":["1,2,3","hello world"],"361":["0,9,8","x,x,y"]}
in python?
If it is guaranteed that json_decode is a list of dictionaries, you can get your desired output with the following:
dict([list(x.items())[0] for x in json_decode])
I hope this helps.
I guess you could use something like:
def merge_dicts(dict1, dict2):
    return dict(list(dict1.items()) + list(dict2.items()))

l = [{"538":["1,2,3","hello world"]},{"361":["0,9,8","x,x,y"]}]
print(merge_dicts(l[0], l[1]))
Output:
{'361': ['0,9,8', 'x,x,y'], '538': ['1,2,3', 'hello world']}
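Note that merge_dicts only handles exactly two dictionaries. If the list may contain any number of them, a dict comprehension generalizes the same idea:

```python
json_decode = [{"538": ["1,2,3", "hello world"]}, {"361": ["0,9,8", "x,x,y"]}]

# Flatten the list of single-key dicts into one dict, later keys winning on collision
merged = {k: v for d in json_decode for k, v in d.items()}
print(merged)  # {'538': ['1,2,3', 'hello world'], '361': ['0,9,8', 'x,x,y']}
```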

Groovy csv to string

I am using Dell Boomi to map data from one system to another. I can use Groovy in the maps but have no experience with it. I tried to do this with the other Boomi tools, but have been told that I'll need to use Groovy in a script. My inbound data is:
132265,Brown
132265,Gold
132265,Gray
132265,Green
I would like to output:
132265,"Brown,Gold,Gray,Green"
Hopefully this makes sense! Any ideas on the Groovy code to make this work?
It can be elegantly solved with groupBy and the spread operator:
@Grapes(
    @Grab(group='org.apache.commons', module='commons-csv', version='1.2')
)
import org.apache.commons.csv.*

def csv = '''
132265,Brown
132265,Gold
132265,Gray
132265,Green
'''

def parsed = CSVParser.parse(csv, CSVFormat.DEFAULT.withHeader('code', 'color'))
parsed.records.groupBy({ it.code }).each { k, v -> println "$k,\"${v*.color.join(',')}\"" }
The above prints:
132265,"Brown,Gold,Gray,Green"
Well, I don't know how you are getting your data, but here is a general way to achieve your goal. You can use a library, such as the one below, to parse the CSV:
https://github.com/xlson/groovycsv
The example for your data would be:
@Grab('com.xlson.groovycsv:groovycsv:1.1')
import static com.xlson.groovycsv.CsvParser.parseCsv
def csv = '''
132265,Brown
132265,Gold
132265,Gray
132265,Green
'''
def data = parseCsv(csv)
I believe you want to associate the number with the various colour values. So for each line you can create a map entry from the number to the colours associated with it, splitting the line by ",":
map = [:]
for(line in data) {
    number = line.split(',')[0]
    colour = line.split(',')[1]
    if(!map[number])
        map[number] = []
    map[number].add(colour)
}
println map
So map should contain:
[132265:["Brown","Gold","Gray","Green"]]
Well, if this is not exactly what you want, you can still extract the general idea.
Assuming your data is coming in as a comma separated string of data like this:
"132265,Brown 132265,Gold 132265,Gray 132265,Green 122222,Red 122222,White"
The following Groovy script code should do the trick:
def csvString = "132265,Brown 132265,Gold 132265,Gray 132265,Green 122222,Red 122222,White"

LinkedHashMap.metaClass.multiPut << { key, value ->
    delegate[key] = delegate[key] ?: []
    delegate[key] += value
}

def map = [:]
def csv = csvString.split().collect{ entry -> entry.split(",") }
csv.each{ entry -> map.multiPut(entry[0], entry[1]) }
def result = map.collect{ k, v -> k + ',"' + v.join(",") + '"' }.join("\n")
println result
Would print:
132265,"Brown,Gold,Gray,Green"
122222,"Red,White"
Do you HAVE to use scripting for some reason? This can be easily accomplished with out-of-the-box Boomi functionality.
Create a map function that prepends the ID field to a string of your choice (i.e. 222_concat_fields). Then use that value to set a dynamic process property.
The value of the process property will contain the result of concatenating the name fields. Simply adding this function to your map should take care of it. Then use the final value to populate your result.
Well, it depends on how the data is coming in.
If the data you posted in the question arrives in a single document, then you can easily handle this in a map with Groovy scripting.
If the data arrives as multiple documents, i.e.
doc1: 132265,Brown
doc2: 132265,Gold
doc3: 132265,Gray
doc4: 132265,Green
then it cannot be handled in a map. You will need to use a Data Process step with custom scripting.
The Groovy code you are asking for depends on the input profile in which you are getting the data. Please provide more information, i.e. the input profile, fields, etc.