So I am importing a JSON file of (example) users into the Realtime Database and have noticed a slight issue.
When added, each user is keyed by its position in the JSON file rather than by its ID. Since the file starts with user 4, that user ends up under the key 1.
The user JSON is formatted as follows:
{
    "instagram": "null",
    "invited_by_user_profile": "null",
    "name": "Rohan Seth",
    "num_followers": 4187268,
    "num_following": 599,
    "photo_url": "https://clubhouseprod.s3.amazonaws.com:443/4_b471abef-7c14-43af-999a-6ecd1dd1709c",
    "time_created": "2020-03-17T07:51:28.085566+00:00",
    "twitter": "rohanseth",
    "user_id": 4,
    "username": "rohan"
}
Is there some easy way to make the keys in Firebase the user ID of each user instead of the sequential numbers currently used?
When you import a JSON file into the Firebase Realtime Database, it uses whatever keys exist in the JSON. There is no support for remapping the keys during the import.
But of course you can do this with some code:
For example, you can change the JSON before you import it, so that it has the keys you want.
Or you can read the JSON in a small script and insert the data into Firebase through its REST API, as sketched below.
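A minimal sketch of the scripted approach, assuming the file holds a JSON array of user objects (the file name and database URL are placeholders):

import json
import requests

# Hypothetical input file containing a JSON array of user objects.
with open("users.json") as f:
    users = json.load(f)

# Re-key the data by each user's user_id field.
by_id = {str(user["user_id"]): user for user in users}

# A PUT via the REST API writes the object directly at /users,
# so the keys below /users become the user IDs.
requests.put("https://your-project.firebaseio.com/users.json",
             data=json.dumps(by_id))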
I'd recommend against using sequential numeric IDs as keys though. To learn why, have a look at this blog post: Best Practices: Arrays in Firebase.
We are currently using the Pact Broker in our Spring Boot application, with really good results for our integration tests.
Our tests using the Pact Broker are based on a call to a REST API, comparing the response with the value from our provider, always in JSON format.
Our problem is that the values to compare are in a DB where the data changes quite often, which makes us update the tests really often.
Do you know if it is possible to just validate by the data type?
What we would like to try is to validate that the JSON is properly formed and that the data types match. For example, if our REST API gives this output:
[
    {
        "action": "VIEW",
        "id": 1,
        "module": "A",
        "section": "pendingList",
        "state": null
    },
    {
        "action": "VIEW",
        "id": 2,
        "module": "B",
        "section": "finished",
        "state": null
    }
]
For example, what we would like to validate from the previous output is the following:
The JSON is well formed.
All the key/value pairs exist, based on the model.
Each value matches a specific data type; for example, that the key action exists in all the entries and contains a string.
Do you know if this is possible to accomplish with the Pact Broker? I was searching the documentation but did not find any example of how to do it.
Thanks a lot in advance.
Best regards.
Absolutely! The first two things Pact will always do without any extra work.
What you are talking about is referred to as flexible matching [1]: you don't want to match the value, but the type (or a regex). Given you are using Spring Boot, you may want to look at the various matchers available for Pact JVM [2].
I'm not sure if you meant it, but just for clarity: Pact and the Pact Broker are separate things. Pact is the open-source contract-testing framework, and the Pact Broker [3] is a tool to help share and collaborate on those contracts with the team.
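As a concrete illustration of flexible matching, here is a minimal consumer-side sketch. It uses pact-python rather than Pact JVM (the idea is the same; see [2] for the Java matchers), and the consumer, provider, and path names are made up:

from pact import Consumer, Provider, EachLike, Like

pact = Consumer("dashboard").has_pact_with(Provider("actions-api"))

# Match by type, not by value: EachLike for the array,
# Like for each field inside an entry.
expected = EachLike({
    "action": Like("VIEW"),         # any string
    "id": Like(1),                  # any number
    "module": Like("A"),
    "section": Like("pendingList"),
    "state": None,                  # null is matched literally
})

(pact
 .upon_receiving("a request for the action list")
 .with_request("GET", "/actions")
 .will_respond_with(200, body=expected))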
[1] https://docs.pact.io/getting_started/matching
[2] https://github.com/DiUS/pact-jvm/tree/master/consumer/pact-jvm-consumer#dsl-matching-methods
[3] https://github.com/pact-foundation/pact_broker/
Firebase creates a name for the data I upload from MATLAB.
Is there a way to prevent this generated name, or to set it to something constant so that my next upload overwrites it?
Example:
https://cdn1.imggmi.com/uploads/2019/3/24/0cb9e3c19155a8b338806121aed42ea2-full.jpg
(I want the data from MATLAB to have the same structure as the adc sample.)
This is the code I use:
Firebase_Url = 'https://***.firebaseio.com/data_from_matlab.json/';
response = webwrite(Firebase_Url,'{ "first": "Jack", "last": "Sparrow" }')
It looks like MATLAB's webwrite function sends an HTTP POST request, which Firebase's REST API translates into creating a new node with a new unique ID.
It looks like you can pass RequestMethod: 'put' in the weboptions parameter to send a PUT request instead, which Firebase translates into a direct write at that location. So something like:
webwrite(Firebase_Url, '{ "first": "Jack", "last": "Sparrow" }', ...
    weboptions('RequestMethod', 'put'))
I was actually having a similar problem, but I wanted to add multiple objects with different names, and when I used RequestMethod: 'put' in weboptions, Firebase deleted my old objects. Looking into the link given above, I discovered that with RequestMethod: 'patch' I could add multiple objects under the same category without getting the randomly generated key.
I'm trying to pass a list of the following objects as query params to a GET call to my Java service:
{
    "id": "123456",
    "country": "US",
    "locale": "en_us"
}
As a URL, this would look like:
GET endpoint.com/entity?id1=123456&country1=US&locale1=en_us&id2=...
What's the best way to handle this in a service? If I'm passing potentially 15 of these objects, is there a concise way to take in these parameters and convert them to Java objects on the server side?
I imagine that with a URL like this, the service controller would need a lot of @QueryParam annotations...
Create the entire dataset as a JSON array, e.g.
[
    {
        "id": "123456",
        "country": "US",
        "locale": "en_us"
    },
    {
        "id": "7890",
        "country": "UK",
        "locale": "en_gb"
    }
]
Base64-encode it and pass it as a parameter, e.g.
GET endpoint.com/entity?set=BASE64_ENCODED_DATASET
then decode it on the server and parse the JSON array into Java objects, perhaps using Spring Boot.
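A quick sketch of both sides of that idea, with made-up endpoint and data (a Java service would do the same with its own Base64 and JSON libraries):

import base64
import json

entities = [
    {"id": "123456", "country": "US", "locale": "en_us"},
    {"id": "7890", "country": "UK", "locale": "en_gb"},
]

# Client side: serialize, then Base64-encode for the query string.
# urlsafe_b64encode avoids the + and / characters mentioned below.
payload = base64.urlsafe_b64encode(json.dumps(entities).encode()).decode()
url = "https://endpoint.com/entity?set=" + payload

# Server side: reverse the two steps to get the objects back.
decoded = json.loads(base64.urlsafe_b64decode(payload))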
Based on the valid-URL-size comment (although 2000 characters is usable), you could put the data in a header instead, which can be 8-16 KB depending on the server. GETting multiple resources at once is going to involve a compromise somewhere in the design.
As Base64 output can contain +, /, and =, you can URL-encode it too, although I haven't found the need to do this in practice when using this technique with SAML.
Another approach would be to compromise by searching via country- and locale-specific IDs:
GET endpoint.com/entity/{country}/{locale}/{id_csv}
so you would search like this:
GET endpoint.com/entity/US/en_us/123456,0349,23421
your backend (if using Spring) handles {country} and {locale} as @PathVariable parameters and splits {id_csv} to get the list of IDs for that country/locale combination.
To get another country/locale search:
GET endpoint.com/entity/UK/en_gb/7890,234,123232
The URLs are much smaller, but you can't query the entire dataset in one go, since you have to query per country/locale combination.
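A minimal sketch of that route, shown here in Python with Flask rather than Spring (the route and names are illustrative only):

from flask import Flask

app = Flask(__name__)

@app.route("/entity/<country>/<locale>/<id_csv>")
def get_entities(country, locale, id_csv):
    # "123456,0349,23421" -> ["123456", "0349", "23421"]
    ids = id_csv.split(",")
    return {"country": country, "locale": locale, "ids": ids}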
It looks like your GET is fetching multiple resources from the server. I'd consider refactoring so that each GET request fetches a single resource. If that causes performance issues, consider using HTTP caching.
I use pySpark (Spark 2.1.0) and have event log files in S3.
The event logs are in JSON format (each line is a valid JSON object) and each line has an event_id property. The remaining properties vary quite a bit depending on the event_id.
For example:
{"event_id": "1001", "account_id": 1, "name": "John"}
{"event_id": "1004", "account_id": 2, "purchase_id": 5, "purchase_num": 1}
If I load this JSON file into a DataFrame at once, all columns from all events seem to be included as schema properties for all rows. What I want to achieve is to divide the input JSON data into each event_id's rows, set a proper schema for each event_id, and save the files individually in Parquet format, so that each event can be analyzed from the dashboard.
One solution I came up with is to specify each schema and load the whole JSON files once per event_id, but that seems inefficient. I could also leave all rows together, but I'm not sure that is the common way.
Is there any idiomatic and efficient way to achieve this?
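For reference, one idiomatic option is to read everything once and use a partitioned write, sketched below with placeholder bucket paths. Each partition directory still carries the union schema, so columns belonging to other event types will simply be null:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read everything once; Spark infers a union schema across all event types.
df = spark.read.json("s3://my-bucket/event-logs/")

# Write one Parquet directory per event_id in a single pass.
df.write.partitionBy("event_id").parquet("s3://my-bucket/events-parquet/")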
This question has plagued me for months now, and no matter how many articles and topics I read, I've gotten no good information...
I want to send a request to a server which returns a JSON file, then take those results and load them into tables on my local machine, preferably Access or Excel so I can sort and manipulate the data.
Is there a way to do this? Please help!!
Google comes up with this: json2excel.
Or write your own little application.
EDIT
I decided to be nice and write a Python 3 application for you. Use it on the command line like this: python3 jsontoxml.py infile1.json infile2.json. It will output infile1.json.xml and infile2.json.xml.
#!/usr/bin/env python3
import json
import re
import sys
from xml.dom.minidom import parseString

if len(sys.argv) < 2:
    print("Need to specify at least one file.")
    sys.exit()

indent = " " * 4

def parseitem(item, document):
    # Dispatch on the JSON value type: dict, list, or scalar.
    if isinstance(item, dict):
        parsedict(item, document)
    elif isinstance(item, list):
        for listitem in item:
            parseitem(listitem, document)
    else:
        document.append(str(item))

def parsedict(jsondict, document):
    # Each key becomes an XML element wrapping its parsed value.
    for name, value in jsondict.items():
        document.append("<%s>" % name)
        parseitem(value, document)
        document.append("</%s>" % name)

for infile in sys.argv[1:]:
    with open(infile) as f:
        orig = json.load(f)
    document = []
    parsedict(orig, document)
    xmlcontent = parseString("".join(document)).toprettyxml(indent)
    # Collapse text-only elements onto one line, per
    # http://stackoverflow.com/questions/749796/pretty-printing-xml-in-python/3367423#3367423
    xmlcontent = re.sub(r">\n\s+([^<>\s].*?)\n\s+</", r">\g<1></",
                        xmlcontent, flags=re.DOTALL)
    with open(infile + ".xml", "w") as outfile:
        outfile.write(xmlcontent)
Sample input
{"widget": {
"debug": "on",
"window": {
"title": "Sample Konfabulator Widget",
"name": "main_window",
"width": 500,
"height": 500
},
"image": {
"src": "Images/Sun.png",
"name": "sun1",
"hOffset": 250,
"vOffset": 250,
"alignment": "center"
},
"text": {
"data": "Click Here",
"size": 36,
"style": "bold",
"name": "text1",
"hOffset": 250,
"vOffset": 100,
"alignment": "center",
"onMouseUp": "sun1.opacity = (sun1.opacity / 100) * 90;"
}
}}
Sample output
<?xml version="1.0" ?>
<widget>
    <debug>on</debug>
    <window>
        <title>Sample Konfabulator Widget</title>
        <name>main_window</name>
        <width>500</width>
        <height>500</height>
    </window>
    <image>
        <src>Images/Sun.png</src>
        <name>sun1</name>
        <hOffset>250</hOffset>
        <vOffset>250</vOffset>
        <alignment>center</alignment>
    </image>
    <text>
        <data>Click Here</data>
        <size>36</size>
        <style>bold</style>
        <name>text1</name>
        <hOffset>250</hOffset>
        <vOffset>100</vOffset>
        <alignment>center</alignment>
        <onMouseUp>sun1.opacity = (sun1.opacity / 100) * 90;</onMouseUp>
    </text>
</widget>
It's probably overkill, but MongoDB uses JSON-style documents as its native format. That means you can insert your JSON data directly, with little or no modification. It can handle JSON data on its own, without you having to jump through hoops to force your data into a more RDBMS-friendly format.
It is open source software and available for most major platforms. It can also handle extreme amounts of data and multiple servers.
Its command shell is probably not as easy to use as Excel or Access, but it can do sorting etc. on its own, and there are bindings for most programming languages (e.g. C, Python and Java) if you find that you need to do more tricky stuff.
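For instance, loading a downloaded JSON file into MongoDB from Python takes only a few lines (the database, collection, and file names here are made up):

import json
from pymongo import MongoClient

client = MongoClient()  # assumes a MongoDB instance on localhost
collection = client["mydb"]["responses"]

with open("response.json") as f:
    doc = json.load(f)

# The JSON document is stored as-is, nested structure and all.
collection.insert_one(doc)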
EDIT:
For importing/exporting data from/to other more common formats, MongoDB has a couple of useful utilities. CSV is supported, although you should keep in mind that JSON uses structured objects and it is not easy to come up with a direct mapping to a table-based model like CSV, especially with a schema-free database like MongoDB.
Converting JSON to CSV or any other RDBMS-friendly format comes close to (if it does not outright enter) the field of Object-Relational Mapping, which in general is neither simple nor something that can be easily automated.
The MongoDB tools, for example, allow you to create CSV files, but you have to specify which field will be in each column, implicitly assuming that there is in fact some kind of schema in your data.
MongoDB allows you to store and manipulate structured JSON data without having to go through a cumbersome mapping process that can be very frustrating. You would have to modify your way of thinking, moving a bit away from the conventional tabular view of databases, but it allows you to work on the data as it is meant to be worked on, rather than trying to force the tabular model onto it.
JSON (like XML) is a tree rather than a literal table of elements. You will need to populate the table by hand (essentially doing a stack of SQL LEFT JOINs) or populate a bunch of tables and manipulate the joins by hand.
Or is the JSON flat-packed? It MAY be possible to do what you're asking; I'm just pointing out that there's no guarantee.
If it's a quick kludge and the data is flat-packed, then a quick script to read the JSON, dump it to CSV, and then open it in Excel will probably be easiest.
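If the JSON really is flat-packed, that script is only a few lines of Python (file names are placeholders, and every record is assumed to have the same keys):

import csv
import json

with open("data.json") as f:
    rows = json.load(f)  # assumed: a list of flat dicts

with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)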
Storing the data in Access or Excel cannot be done easily, I think. You would essentially have to parse the JSON string with a programming language that supports it (PHP, NodeJS and Python all have native support) and then use a library to output an Excel sheet with the data.
Something else that could be an option, depending on how versed you are with programming languages, is to use something like the ElasticSearch search engine or the CouchDB database, which both support JSON input natively. You could then use them to query the content in various ways.
I've kind of done that before: turn JSON into an HTML table, which means you can turn it into CSV.
However, here are some things you need to know:
1) The JSON data must be well formed, in a predefined structure, e.g.
[
    ["col1", "col2", "col3"],
    ["data11", "data12", "data13"],
    ...
]
2) You have to parse the data row by row and column by column, and you have to take care of missing data or mismatched columns if possible. Of course, you have to be aware of data types.
3) My experience is that if you have ridiculously large data, doing this will kill the client's browser, so you have to progressively fetch the formatted HTML or CSV data from the server.
As suggested by nightcracker above, try the Google tool. :)