The Sheetsee library uses Google Spreadsheets as a data backend. I'm trying to publish my Google Spreadsheet as JSON so that I can access it using the Sheetsee library. The current 'Publish to the web' function available in Google Docs doesn't show any option to publish the data as JSON. Is this something that has been removed from Google Spreadsheets, or is it available somewhere else in Google Docs?
First, you must publish your spreadsheet to the web, using File -> Publish To Web in your Google Spreadsheet.
You can then access your readable JSON API using the /api endpoint.
http://gsx2json.com/api?id=SPREADSHEET_ID&sheet=SHEET_NUMBER&q=QUERY
This will update live with changes to the spreadsheet.
Parameters:
id (required): The ID of your document. This is the long alphanumeric code in the middle of your document's URL.
sheet (optional): The number of the individual sheet you want to get data from. Your first sheet is 1, your second sheet is 2, etc. If no sheet is specified, 1 is the default.
q (optional): A simple query string. This is case-insensitive and will add any row containing the string in any cell to the filtered result.
integers (optional - default: true): Setting 'integers' to false will return numbers as strings (useful for preserving decimal points).
rows (optional - default: true): Setting 'rows' to false will return only column data.
columns (optional - default: true): Setting 'columns' to false will return only row data.
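Putting the parameters together, a request URL can be built like this (a minimal sketch; SPREADSHEET_ID and the query value are placeholders, not real values):

```javascript
// Build a gsx2json request URL from the parameters described above.
// "SPREADSHEET_ID" is a placeholder; substitute your own document ID.
const base = "http://gsx2json.com/api";
const params = new URLSearchParams({ id: "SPREADSHEET_ID", sheet: "1", q: "chris" });
const url = `${base}?${params}`;
console.log(url); // http://gsx2json.com/api?id=SPREADSHEET_ID&sheet=1&q=chris

// In the browser you could then fetch the JSON:
// fetch(url).then(r => r.json()).then(data => console.log(data.rows));
```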
Example response:
There are two sections to the returned data - Columns (containing each column as a data array) and Rows (containing each row of data as an object).
{
  "columns": {
    "name": [
      "Nick",
      "Chris",
      "Barry"
    ],
    "age": [
      21,
      27,
      67
    ]
  },
  "rows": [
    {
      "name": "Nick",
      "age": 21
    },
    {
      "name": "Chris",
      "age": 27
    },
    {
      "name": "Barry",
      "age": 67
    }
  ]
}
I've parsed a JSON response from an API into a 2D array, and now I want to display it on my sheet. The array has 413 rows of varying lengths: row 0 is the header row with 67 fields, but not every row has all 67 fields of data in it.
This is the code I'm using to try to write the data to my sheet (shProductData is a variable I defined earlier in the code to identify my sheet):
shProductData.getRange(1,1,dataArray.length,dataArray[0].length).setValues(dataArray);
However I get the error:
Exception: The number of columns in the data does not match the number of columns in the range. The data has 40 but the range has 67.
It writes the header row to my sheet first but then fails on the next one. Is there any way around this? Or am I going to have to somehow make my sub-arrays all be 67 in size?
You can add empty cells at the end of short rows in the data this way:
var data = [
[1,2,3],
[4,5],
[6,]
]
// get max length of rows in the data
var max_length = Math.max(...data.map(x => x.length));
// add empty cells at end of short rows
data.forEach(row => { while (row.length < max_length) row.push('') } )
console.log(data); // output: [ [ 1, 2, 3 ], [ 4, 5, '' ], [ 6, '', '' ] ]
I've been trying to build an Excel sheet about all the papers published by staff and students of my university. I used the Scopus API to retrieve all the information like author, title and publish dates, and it worked perfectly.
Since the retrieved data was a JSON file I had to convert it to Excel, so I chose OpenRefine, and when I converted the file it created multiple rows if the paper had more than one writer.
For example, like in the attached "Sample Scopus" screenshot. My JSON response looks like this:
{
  "abstracts-retrieval-response": {
    "coredata": {
      "citedby-count": 0,
      "prism:volume": "430-431",
      "prism:pageRange": "240-246",
      "prism:coverDate": "2018-03-01",
      "dc:title": "Solving the 3-COL problem by using tissue P systems without environment and proteins on cells",
      "prism:aggregationType": "Journal",
      "prism:doi": "10.1016/j.ins.2017.11.022",
      "prism:publicationName": "Information Sciences"
    },
    "authors": {
      "author": [
        {
          "ce:given-name": "Daniel",
          "preferred-name": {
            "ce:given-name": "Daniel",
            "ce:initials": "D.",
            "ce:surname": "Díaz-Pernil",
            "ce:indexed-name": "Díaz-Pernil D."
          },
          "#seq": 1,
          "ce:initials": "D.",
          "#_fa": true,
          "affiliation": {
            "#id": 60033284,
            "#href": "http://api.elsevier.com/content/affiliation/affiliation_id/60033284"
          },
          "ce:surname": "Díaz-Pernil",
          "#auid": 16645195100,
          "author-url": "http://api.elsevier.com/content/author/author_id/16645195100",
          "ce:indexed-name": "Diaz-Pernil D."
        },
        {
          "ce:given-name": "Hepzibah A.",
          "preferred-name": {
            "ce:given-name": "Hepzibah A.",
            "ce:initials": "H.A.",
            "ce:surname": "Christinal",
            "ce:indexed-name": "Christinal H."
          },
          "#seq": 2,
          "ce:initials": "H.A.",
          "#_fa": true,
          "affiliation": {
            "#id": 60100082,
            "#href": "http://api.elsevier.com/content/affiliation/affiliation_id/60100082"
          },
          "ce:surname": "Christinal",
          "#auid": 57197875639,
          "author-url": "http://api.elsevier.com/content/author/author_id/57197875639",
          "ce:indexed-name": "Christinal H.A."
        },
        {
          "ce:given-name": "Miguel A.",
          "preferred-name": {
            "ce:given-name": "Miguel A.",
            "ce:initials": "M.A.",
            "ce:surname": "Gutiérrez-Naranjo",
            "ce:indexed-name": "Gutiérrez-Naranjo M."
          },
          "#seq": 3,
          "ce:initials": "M.A.",
          "#_fa": true,
          "affiliation": {
            "#id": 60033284,
            "#href": "http://api.elsevier.com/content/affiliation/affiliation_id/60033284"
          },
          "ce:surname": "Gutiérrez-Naranjo",
          "#auid": 6506630834,
          "author-url": "http://api.elsevier.com/content/author/author_id/6506630834",
          "ce:indexed-name": "Gutierrez-Naranjo M.A."
        }
      ]
    }
  }
}
So how do I combine all the authors into a single cell according to the Title?
After importing the JSON into OpenRefine, you need to organise the project into Records. See http://kb.refinepro.com/2012/03/difference-between-record-and-row.html for an explanation of the difference between 'rows' and 'records' in OpenRefine.
To get the project into records you need to move a column containing information that will only appear once in each record (e.g. the title column, which may be labelled something like "_ - abstracts-retrieval-response - coredata - dc:title" based on the JSON you've pasted here) to the start of the project. See http://kb.refinepro.com/2012/06/create-records-in-google-refine.html for more information on creating records in OpenRefine.
Once you have done this, switch to the 'records' view (click the 'records' link towards the top left of the data table) and then do as @Ettore-Rizza mentions in his comment - pick the column containing the names you want to use (e.g. the "_ - abstracts-retrieval-response - authors - author - _ - ce:indexed-name" column) and use the Edit cells -> Join multi-valued cells option from the drop-down menu at the top of the column.
Because each author related to the article is described in the JSON with multiple fields (including various name forms plus a URL), you'll need to either remove the other columns containing author info, or merge the multiple values into a single field using the 'Join multi-valued cells' option on all the affected columns. Unless you need to retain this information, it is much easier to remove the unwanted columns.
Once this is done, and assuming there are no other fields which have repeated data in the record, you should have a single row per title.
I am going to implement REST-based CRUD in my app. I want to display the list of product data with edit and delete links.
Product
id, title, unit_id, product_type_id, currency_id,price
Q1: What should the JSON response look like?
Two formats come to mind for structuring the data in the JSON response of the REST GET call:
[
  {
    "id": 1,
    "title": "T-Shirt",
    "unit_id": 20,
    "unit_title": "abc",
    "product_type_id": 30,
    "product_type_title": "xyz",
    "currency_id": 10,
    "currency_name": "USD",
    "min_price": 20
  },
  {...}
]
and the other one is
[
  {
    "id": 1,
    "title": "T-Shirt",
    "unit": {
      "id": 20,
      "title": "abc"
    },
    "product_type": {
      "id": 30,
      "title": "xyz"
    },
    "currency": {
      "id": 10,
      "name": "USD"
    },
    "min_price": 20
  },
  {...}
]
What is the better and more standard way to handle the above scenario?
Furthermore, suppose I have 10 more properties in the product table which will never be displayed on the list page, but which I need when the user goes to edit a specific item.
Q2: Should I load all the data at once when displaying the product list and pass it to the edit component,
or
load only the needed properties of the product table, pass the id to the product edit component, and make a new REST GET call with the id to fetch the remaining properties of the product?
I am using React + Redux for my front end.
Typically, you would create additional methods for API consumers to retrieve the values that populate the lists of currency, product_type and unit when editing in a UI.
I wouldn't return more data than necessary for an individual Product object.
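To illustrate, here is a sketch (using the second, nested format from the question as a hypothetical response shape) of the client keeping only the fields the list page actually renders:

```javascript
// Hypothetical nested response (the second format from the question),
// trimmed down on the client to just what the list page displays.
const products = [
  {
    id: 1,
    title: "T-Shirt",
    unit: { id: 20, title: "abc" },
    product_type: { id: 30, title: "xyz" },
    currency: { id: 10, name: "USD" },
    min_price: 20
  }
];

// Select only the list-page fields; the edit view would fetch the
// full object by id in a separate GET call.
const listRows = products.map(p => ({
  id: p.id,
  title: p.title,
  currency: p.currency.name,
  price: p.min_price
}));
// listRows: [{ id: 1, title: "T-Shirt", currency: "USD", price: 20 }]
```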
How would you query VersionOne (V1) to build a report that contains backlog items with associated tasks, defects, and especially attachments, in C#, for a given project? Does anyone have a C# V1 API/JSON example of how to do this? I need the part that queries VersionOne and extracts the attachments to a directory. I can do the reporting part.
Thanks,
Remy
I suggest using any C# HTTP library you like. Submit a query such as the following to ~/query.v1. The query text may be in a POST body or in a GET URL parameter named query:
where:
  Name: Whatever Project You Want
from: Scope
select:
  - Name
  - from: Workitems:PrimaryWorkitem
    select:
      - AssetType
      - Number
      - from: Attachments
        select:
          - Name
          - Description
          - ContentType
          - Content
      - from: Children:Task
        select:
          - Name
          - Number
          - AssetType
          - from: Attachments
            select:
              - Name
              - Description
              - ContentType
              - Content
Above, I select Attachment.Content which would yield a base64 blob in the output. The attachment content URLs are not present in any attribute that can be selected by query.v1, but you can create them by appending the Attachment id to ~/attachment.v1
Results will be returned in a straightforward hierarchical JSON response:
[
[
{
"_oid":"Scope:57460",
"Name":"openAgile",
"Workitems:PrimaryWorkitem": [
{
"_oid":"Story:83524",
"AssetType":"Story",
"Number":"S-08114",
"Attachments":[],
"Subs":[],
"Children:Task": [
{
"_oid":"Task:86578",
"Name":"Test Integration in Atlanta",
"Number":"TK-11051",
"AssetType":"Task"
},
{
"_oid":"Task:86581",
"Name":"Install In our Production environment",
"Number":"TK-11052",
"AssetType":"Task"
},
{
"_oid":"Task:86584",
"Name":"Document",
"Number":"TK-11053",
"AssetType":"Task"
}
]
}
]
}
]
]
You may also use the rest-1.v1 endpoint or our SDK library, but query.v1 is highly recommended for virtually any report or read-only query that it supports.
Considering the following data structures, what would be better for querying the data once stored in a database system (RDBMS or NoSQL)? The fields within the metadata field are user-defined and will differ from user to user. Possible values are strings, numbers, dates, or even arrays.
var file1 = {
  id: 123, name: "mypicture", owner: 1,
  metadata: {
    people: ["Ben", "Tom"],
    created: "2013/01/01",
    license: "free",
    rating: 4
    // ...
  },
  tags: ["tag1", "tag2", "tag3", "tag4"]
};
var file2 = {
  id: 155, name: "otherpicture", owner: 1,
  metadata: {
    people: ["Tom", "Carla"],
    created: "2013/02/02",
    license: "free",
    rating: 4
    // ...
  },
  tags: ["tag4", "tag5"]
};
var file1OtherUser = {
  id: 345, name: "mydocument", owner: 2,
  metadata: {
    authors: ["Mike"],
    published: "2013/02/02"
    // ...
  },
  tags: ["othertag"]
};
Our users should have the ability to search/filter their files:
User 1: Show all files where "Tom" is in "people" array
User 1: Show all files "created" between 2013/01/01 and 2013/02/01
User 1: Show all files having "license" "free" and "rating" greater 2
User 2: Show all files "published" in "2012" and tagged with "important"
...
Results should be filtered in way like you can do in OS X with intelligent folders. The individual metadata fields are defined before files are being uploaded/stored. But they also may change after that, e.g. User 1 may rename the metadata field "people" to "cast".
As @WiredPrairie said, the fields within the metadata field look variable, perhaps dependent on what the user enters, which is supported by:
User 1 may rename the metadata field "people" to "cast".
MongoDB cannot create variable indexes whereby you just say that every new field in metadata gets added to the compound index; however, you could use a key-value structure like so:
var file1 = {
  id: 123, name: "mypicture", owner: 1,
  metadata: [
    {k: "people", v: ["Ben", "Tom"]},
    {k: "created", v: "2013/01/01"}
  ],
  tags: ["tag1", "tag2", "tag3", "tag4"]
};
That is one method of doing this, allowing you to index on both k and v dynamically within the metadata field. You would then query by this like so:
db.col.find({metadata: {$elemMatch: {k: "people", v: ["Ben"]}}})
However this does introduce another problem. $elemMatch works on top level, not nested elements. Imagine you wanted to find all files where "Ben" was one of the people, you can't use $elemMatch here so you would have to do:
db.col.find({"metadata.k": "people", "metadata.v": "Ben"})
The immediate problem with this query lies in the way MongoDB matches: when it queries the metadata field, it looks for documents where some field "k" equals "people" and some (possibly different) field "v" equals "Ben".
Since this is a multi-value field, you could run into the problem where, even though "Ben" is not in the people list, he exists in another field of the metadata and you pick out the wrong documents; i.e. this query would match:
var file1 = {
  id: 123, name: "mypicture", owner: 1,
  metadata: [
    {k: "people", v: ["Tom"]},
    {k: "created", v: "2013/01/01"},
    {k: "person", v: "Ben"}
  ],
  tags: ["tag1", "tag2", "tag3", "tag4"]
};
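The false positive can be sketched in plain JavaScript (an approximation of the matching semantics for illustration, not the real MongoDB engine):

```javascript
// A document where "Ben" appears under "person", not "people".
const file = {
  id: 123,
  metadata: [
    { k: "people", v: ["Tom"] },
    { k: "created", v: "2013/01/01" },
    { k: "person", v: "Ben" }
  ]
};

const contains = (v, value) => Array.isArray(v) ? v.includes(value) : v === value;

// Independent match, like {"metadata.k": ..., "metadata.v": ...}:
// some element has the key, some (possibly other) element has the value.
const looseMatch = (doc, k, value) =>
  doc.metadata.some(m => m.k === k) &&
  doc.metadata.some(m => contains(m.v, value));

// $elemMatch-style: one element must satisfy both conditions.
const elemMatch = (doc, k, value) =>
  doc.metadata.some(m => m.k === k && contains(m.v, value));

console.log(looseMatch(file, "people", "Ben")); // true  - the false positive
console.log(elemMatch(file, "people", "Ben")); // false
```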
The only real way to solve this is to factor off the dynamic fields to another collection where you don't have this problem.
This creates a new problem, though: you can no longer get a full file in a single round trip, nor can you aggregate both the file row and its user-defined fields in one go. So, all in all, you lose a lot of capability by doing this.
That being said you can still perform quite a few queries, i.e.:
User 1: Show all files where "Tom" is in "people" array
User 1: Show all files "created" between 2013/01/01 and 2013/02/01
User 1: Show all files having "license" "free" and "rating" greater 2
User 2: Show all files "published" in "2012" and tagged with "important"
All of those would still be possible with this schema.
As for which is better, RDBMS or NoSQL: it is difficult to say here. I would say both could be quite good at querying this structure, if done right.