I'm trying to implement a footerCallback in DataTables that does a conditional sum of some columns, based on a cell in a different column of the same row. Can anyone help me with this? I have used the code below and checked alert(cur_index), but I think it is not working as expected, and I did not get a correct sum of the column's values. My code is:
pageTotal6 = api
    .column( 6, { page: 'current'} )
    .data()
    .reduce( function (a, b) {
        var cur_index = api.column(6).data().indexOf(b);
        alert(cur_index);
        alert(api.column(3).data()[cur_index]);
        if (api.column(3).data()[cur_index] != "Pending review") {
            return parseInt(a) + parseInt(b);
        }
        else { return parseInt(a); }
    }, 0 );
Also, the 3rd column contains some repeated values, and I want to sum only the distinct values from that column. How can I do these two things using DataTables and HTML?
There are two ways you can go about this.
First Method
(I will assume you are reading JSON data from a database [ViewModel] in C#, and using server-side processing.)
Using the image below as a reference for how I solved the problem:
I wanted to sum the "Amount" column where "Measure Type" (last column) != 99. The first thing I did with the ViewModel that passes the list to my JSON object was to add an extra sum column that excludes any MeasureType = 99 rows from the table.
So essentially my JSON object has two columns that read the Amount column data: one is visible in the image and holds all the figures, and another is invisible and only carries the values I want to sum in my footer.
while (MyDataReader.Read())
{
    // get all other columns
    // duplicate amount column, zeroed out when measuretype == 99
    if (reportData.q_measuretype != 99)
    {
        reportData.amountNo99 = Convert.ToDecimal(String.Format("{0:0.00}", MyDataReader["q_amount"]));
    }
    else
    {
        reportData.amountNo99 = 0;
    }
    list.Add(reportData);
}
After that step, within the footerCallback function you can keep things simple by summing just the invisible column, because the condition has already been applied when the list of rows is sent to the page:
totalNettNo99 = api
.column(8, { page: 'current' }) //remember this is the last invisible column
.data()
.reduce(function (a, b) {
return intVal(a) + intVal(b);
});
You can then update the footer of the visible column 3 (index 2) with that sum:
$(api.column(2).footer()).html(
'€' + totalNettNo99.toFixed(2)
);
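For reference, the intVal helper used in the reduce above is not shown in the original answer. A common sketch (this exact implementation is an assumption, not part of the answer) strips currency symbols and separators before parsing, so it can be tested on plain data:

```javascript
// Hypothetical sketch of the intVal helper referenced in the reduce above –
// not shown in the original answer. It strips currency symbols and
// separators before parsing, so '€4.50' becomes 4.5.
function intVal(i) {
    return typeof i === 'string'
        ? Number(i.replace(/[^0-9.-]+/g, '')) || 0
        : typeof i === 'number' ? i : 0;
}

// Simulated data for the invisible column (excluded rows arrive as 0):
var columnData = ['€4.50', '€4.50', 0, '€4.50'];
var total = columnData.reduce(function (a, b) {
    return intVal(a) + intVal(b);
}, 0);
console.log(total.toFixed(2)); // "13.50"
```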
Remember to set up the invisible column this way in "columnDefs":
"ajax": {
"url": "/Reports/loadTransactionList",
"type": "POST",
"datatype": "JSON"
},
"columnDefs": [
    {
        "targets": [8],
        "visible": false,
        "searchable": false
    }
],
"columns": [
{
"data": "convertDateToString"
},
{
"data": "convertTimeToString"
},
{
"data": "q_receiptnumber"
},
As you can see from the image, only the rows with Guinness Pint are totalled in the footer. It's a bit more typing, but it solves the problem if you have been tearing your hair out over a script-only solution.
Second Method
You can have a look at this answer, done purely in script and with less typing than my solution:
https://stackoverflow.com/a/42215009/7610106
Credit to nkbved.
Related
I'm searching through the documentation as best I can, but I'm on a time limit here, so if someone can tell me, that would be great.
I need to insert data into a column and have it push the existing data in the column down when it's inserted. For example, I need to add the word "Good" at the top of the column; "Bad" was at the top, but when I push in "Good", "Bad" moves to the number two spot, the number two spot becomes number three, and so on. It needs to do this without deleting or moving the rows themselves, because I'm reading data from two columns in the sheet and then writing to a third column.
Thanks in advance!
Welcome to StackOverflow.
From what I understood reading your question, you have already been able to read data from two columns and now you just want to store some of that data in a separate column. Apologies if I misunderstood your question.
If I understood your question right, I would suggest you create a list of requests and make a batch update with it, which helps you avoid hitting the write quota.
So, here is how it goes:
request = []
request.append({
"updateCells": {
"rows": [
{
"values":[
{
"userEnteredValue": {
"numberValue": 546564 #Assuming your value is integer
},
"userEnteredFormat": {
"horizontalAlignment": "CENTER",
"verticalAlignment": "MIDDLE"
}
}
]
}
],
"fields": "*",
"range": {
#Replace these values with actual values
"sheetId": sheetId,
"startRowIndex": startRow, #Indexing start from 0
"endRowIndex": endRow,
"startColumnIndex": startColumn,
"endColumnIndex": endColumn,
}
}
})
#You can add more requests like this in the list and then execute
body = {
"requests": request
}
response = sheet.spreadsheets().batchUpdate(
spreadsheetId=spreadsheet_id,
body=body).execute()
#If you are using gspread, then you can use this
sheet.batch_update({"requests" : request})
This will update the cells with your given value. For detailed information and other formatting options, follow the documentation.
I'm getting Yahoo Finance data as a JSON file (via the YahooFinancials python API) and I would like to be able to parse the data in a smart way to feed my Google Sheet.
For this example, I'm interested in getting the "cash" variable under the "date" nested structure. But as you'll see, sometimes there is no "cash" variable under the first date, so I would like the script/formula to go and get the "cash" variable that's under the second date structure.
Here is sample 1 of JSON code:
{ "balanceSheetHistoryQuarterly": {
"ABBV": [
{
"2018-12-31": {
"totalStockholderEquity": -2921000000,
"netTangibleAssets": -45264000000
}
},
{
"2018-09-30": {
"intangibleAssets": 26625000000,
"capitalSurplus": 14680000000,
"totalLiab": 69085000000,
"totalStockholderEquity": -2921000000,
"otherCurrentLiab": 378000000,
"totalAssets": 66164000000,
"commonStock": 18000000,
"otherCurrentAssets": 112000000,
"retainedEarnings": 6789000000,
"otherLiab": 16511000000,
"goodWill": 15718000000,
"treasuryStock": -24408000000,
"otherAssets": 943000000,
"cash": 8015000000,
"totalCurrentLiabilities": 15387000000,
"shortLongTermDebt": 1026000000,
"otherStockholderEquity": -2559000000,
"propertyPlantEquipment": 2950000000,
"totalCurrentAssets": 18465000000,
"longTermInvestments": 1463000000,
"netTangibleAssets": -45264000000,
"shortTermInvestments": 770000000,
"netReceivables": 5780000000,
"longTermDebt": 37187000000,
"inventory": 1786000000,
"accountsPayable": 10981000000
}
},
{
"2018-06-30": {
"intangibleAssets": 26903000000,
"capitalSurplus": 14596000000,
"totalLiab": 65016000000,
"totalStockholderEquity": -3375000000,
"otherCurrentLiab": 350000000,
"totalAssets": 61641000000,
"commonStock": 18000000,
"otherCurrentAssets": 128000000,
"retainedEarnings": 5495000000,
"otherLiab": 16576000000,
"goodWill": 15692000000,
"treasuryStock": -23484000000,
"otherAssets": 909000000,
"cash": 3547000000,
"totalCurrentLiabilities": 17224000000,
"shortLongTermDebt": 3026000000,
"otherStockholderEquity": -2639000000,
"propertyPlantEquipment": 2787000000,
"totalCurrentAssets": 13845000000,
"longTermInvestments": 1505000000,
"netTangibleAssets": -45970000000,
"shortTermInvestments": 196000000,
"netReceivables": 5793000000,
"longTermDebt": 31216000000,
"inventory": 1580000000,
"accountsPayable": 10337000000
}
},
{
"2018-03-31": {
"intangibleAssets": 27230000000,
"capitalSurplus": 14519000000,
"totalLiab": 65789000000,
"totalStockholderEquity": 3553000000,
"otherCurrentLiab": 125000000,
"totalAssets": 69342000000,
"commonStock": 18000000,
"otherCurrentAssets": 17000000,
"retainedEarnings": 4977000000,
"otherLiab": 17250000000,
"goodWill": 15880000000,
"treasuryStock": -15961000000,
"otherAssets": 903000000,
"cash": 9007000000,
"totalCurrentLiabilities": 17058000000,
"shortLongTermDebt": 6024000000,
"otherStockholderEquity": -2630000000,
"propertyPlantEquipment": 2828000000,
"totalCurrentAssets": 20444000000,
"longTermInvestments": 2057000000,
"netTangibleAssets": -39557000000,
"shortTermInvestments": 467000000,
"netReceivables": 5841000000,
"longTermDebt": 31481000000,
"inventory": 1738000000,
"accountsPayable": 10542000000
}
}
]
}
}
The first date structure (under 2018-12-31) doesn't contain the cash variable, so I would like the Google Sheet to search for the same data under 2018-09-30, and if it's not available there, search under 2018-06-30.
OR just scan the nested structure dates and fetch the first "cash" occurrence that will be found.
Basically, I would like to know how to skip the name of the date variable (i.e. 2018-12-31), as it doesn't really matter, and just make the formula seek the first available "cash" variable.
Main questions recap
How to skip mentioning an exact nested level name and scan what's inside?
How to keep scanning until you find the desired variable with a value that is not "null" (this can happen)?
What would be the entire formula to achieve the following logic: scan the JSON file until you find the value; if no value is found, fall back to this IMPORTXML function that calls an external API.
Let me know if you need more context about the issue and thanks in advance for your help :)
EDIT: this is the IMPORTJSON formula I use in the cell of the spreadsheet right now.
=ImportJSON("https://api.myjson.com/bins/8mxvi", "/financial/balanceSheetHistoryQuarterly/ABBV/2018-31-12/cash", "noHeaders")
Obviously, this one returns an error, as there is nothing under that date. The JSON link above is the valid one I'm using right now.
=REGEXEXTRACT(FILTER(
TRANSPOSE(SPLIT(SUBSTITUTE(A1, ","&CHAR(10), "×"), "×")),
ISNUMBER(SEARCH("*"&"cash"&"*",
TRANSPOSE(SPLIT(SUBSTITUTE(A1, ","&CHAR(10), "×"), "×"))))), ": (.+)")
=INDEX(ARRAYFORMULA(SUBSTITUTE(REGEXEXTRACT(FILTER(TRANSPOSE(SPLIT(SUBSTITUTE(
TRANSPOSE(IMPORTDATA("https://api.myjson.com/bins/8mxvi")), ","&CHAR(10), "×"), "×")),
ISNUMBER(SEARCH("*"&"cash"&"*", TRANSPOSE(SPLIT(SUBSTITUTE(
TRANSPOSE(IMPORTDATA("https://api.myjson.com/bins/8mxvi")), ","&CHAR(10), "×"), "×"))))),
":(.+)"), ",", "")), 1, 1)
I've had a hard time getting two columns from a CSV file with which I am planning on building a basic bar chart. I was planning on getting 2 arrays (one for each column) within one array that I would use as such to build a bar chart. Just getting started with D3, as you can tell.
Currently, loading the data gets me an array of objects, which is then a mess to extract two columns of key-value pairs from. I'm not sure my thinking is correct...
I see this similar question:
d3 - load two specific columns from csv file
But how would I use selections and enter() to accomplish my goal?
You can't load just 2 columns of a bigger CSV, but you can load the whole thing and extract the columns you want.
Say your csv is like this:
col1,col2,col3,col4
aaa1,aaa2,aaa3,aaa4
bbb1,bbb2,bbb3,bbb4
ccc1,ccc2,ccc3,ccc4
And you load it with
d3.csv('my.csv', function(err, data) {
console.log(data)
/*
output:
[
{ col1:'aaa1', col2:'aaa2', col3:'aaa3', col4:'aaa4' },
{ col1:'bbb1', col2:'bbb2', col3:'bbb3', col4:'bbb4' },
{ col1:'ccc1', col2:'ccc2', col3:'ccc3', col4:'ccc4' }
]
*/
})
If you only want col2 and col3 (and you don't simply want to leave the other columns' data in there, which shouldn't be a problem anyway), you can do this:
var cols2and3 = data.map(function(d) {
return {
col2: d.col2,
col3: d.col3
}
});
console.log(cols2and3)
/*
output:
[
{ col2:'aaa2', col3:'aaa3' },
{ col2:'bbb2', col3:'bbb3' },
{ col2:'ccc2', col3:'ccc3' }
]
*/
I.e. the above code produced a new array of objects with only two props per object.
If you want just an array of values per column — not objects with both columns' values — you can:
var col2data = data.map(function(d) { return d.col2 });
var col3data = data.map(function(d) { return d.col3 });
console.log(col2data) // outputs: ['aaa2', 'bbb2', 'ccc2']
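As for selections and enter(): once you have the two columns, a common pattern is to zip them into one array of {name, value} pairs and bind that with selection.data(pairs).enter() to create one rect per pair. A plain-JS sketch of the zipping step (column names and numeric values here are illustrative):

```javascript
// Zip columns 2 and 3 into {name, value} pairs, coercing values to numbers –
// the resulting array is what you'd hand to selection.data(pairs).enter()
// in the usual D3 bar-chart pattern.
var data = [
    { col1: 'aaa1', col2: 'aaa2', col3: '10' },
    { col1: 'bbb1', col2: 'bbb2', col3: '20' },
    { col1: 'ccc1', col2: 'ccc2', col3: '30' }
];
var pairs = data.map(function (d) {
    return { name: d.col2, value: +d.col3 };
});
console.log(pairs[0]); // { name: 'aaa2', value: 10 }
```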
I'm trying to work out how to append a zero to a specific JSON-decoded array value for multiple records stored in a MySQL table, according to some conditions.
For example, in the table 'menu', the column 'params' (text) has records containing JSON-encoded data of this format:
{"categories":["190"],"singleCatOrdering":"","menu-anchor_title":""}
and the column 'id' has a numeric value of 90.
My goal is to add a zero to the 'categories' value in menu.params whenever (for example) menu.id is under 100.
For this record, the result would be:
{"categories":["1900"],"singleCatOrdering":"","menu-anchor_title":""}
So I'm looking for an SQL query that will find the occurrences of "categories":["999"] in the database and update each record by adding a zero to the end of the value.
This answer is partially helpful, offering to use mysql-udf-regexp, but it refers to REPLACE-ing a value rather than UPDATE-ing it.
Perhaps the REGEXP_REPLACE function would do the trick? I have never used this library and am not familiar with it; perhaps there is an easier way to achieve what I need?
Thanks
If I understand your question correctly, you want code that does something like this:
var data = {
"menu": {
"id": 90,
"params": {
"categories": ["190"],
"singleCatOrdering": "",
"menu-anchor_title": ""
}
}
};
var keys = Object.keys(data);
for (var ii = 0, key; key = keys[ii]; ii++) {
    var value = data[key];
    if (value.id < 100) {
        value.params.categories[0] += "0";
        alert(value.params.categories[0]);
    }
}
jsFiddle
However, I am not using a regular expression at all. Perhaps if you reword the question, the necessity of a regex will become clearer.
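Since the column stores a JSON-encoded string rather than an already-decoded object, the same append-a-zero step on the string form can be sketched as parse, modify, re-serialize (the row object here is illustrative dummy data, not a database call):

```javascript
// Dummy row mirroring the question: id under 100, params is a JSON string.
var row = {
    id: 90,
    params: '{"categories":["190"],"singleCatOrdering":"","menu-anchor_title":""}'
};
if (row.id < 100) {
    var params = JSON.parse(row.params);  // decode the stored JSON string
    params.categories[0] += '0';          // "190" -> "1900"
    row.params = JSON.stringify(params);  // re-serialize before writing back
}
console.log(row.params);
// {"categories":["1900"],"singleCatOrdering":"","menu-anchor_title":""}
```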
I would like to sort by a field in a specific order, let's say 2,4,1,5,3.
In MySQL I could use ORDER BY FIELD(id,2,4,1,5,3).
Is there anything equivalent for ArangoDB?
I think it should be possible to use the POSITION AQL function, which can return the position of an element inside an array:
FOR i IN [ 1, 2, 3, 4, 5 ] /* what to iterate over */
SORT POSITION([ 2, 4, 1, 5, 3 ], i, true) /* order to be returned */
RETURN i
This will return:
[ 2, 4, 1, 5, 3 ]
Update: my original answer included the CONTAINS AQL function, however, it should be POSITION!
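To sanity-check the expected ordering outside the database, the same idea POSITION() expresses can be sketched in plain script (sort values by their index in a custom order array):

```javascript
// Sort values by their position in a custom order array – the same idea
// POSITION() expresses in the AQL query above.
var order = [2, 4, 1, 5, 3];
var sorted = [1, 2, 3, 4, 5].slice().sort(function (a, b) {
    return order.indexOf(a) - order.indexOf(b);
});
console.log(sorted); // [2, 4, 1, 5, 3]
```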
Unfortunately, there is no direct equivalent for that at the moment.
However, there are ways to accomplish that by yourself.
1) By constructing an AQL query:
The query would run through your sort-value array and query the DB for every defined value. Each of those results would then be added to the final output array.
Mind you, this does have a performance penalty, because there is one query for every value. If you define only a few values, I guess it will be tolerable, but if you have to define, for example, tens or hundreds, it will lead to n+1 queries (where n is the number of custom-sorted values).
The "+1" is the last query, which fetches all the other values that are not defined in your custom sort array and appends them to the output array as well.
That would look like the following snippet, which you can copy into your AQL Editor and run it.
Notes for the snippet:
I first create a dummy array representing the collection we would query.
Then I set the defined sort values.
After that, the actual AQL statement does its job.
Also note the FLATTEN function at the outer RETURN statement, which is required because the first loop produces a result array for each defined sort value. These all have to be flattened down to the same level so they can be processed as a single result set (instead of many small encapsulated ones).
/* Define a dummy collection-array to work with */
LET a = [
{
"_id": "a/384072353674",
"_key": "384072353674",
"_rev": "384073795466",
"sort": 2
},
{
"_id": "a/384075040650",
"_key": "384075040650",
"_rev": "384075827082",
"sort": 3
},
{
"_id": "a/384077137802",
"_key": "384077137802",
"_rev": "384078579594",
"sort": 4
},
{
"_id": "a/384067504010",
"_key": "384067504010",
"_rev": "384069732234",
"sort": 1
},
{
"_id": "a/384079497098",
"_key": "384079497098",
"_rev": "384081004426",
"sort": 5
}
]
/* Define the custom sort values */
LET cSort = [5,3,1]
/* Gather the results of each defined sort value query into definedSortResults */
LET definedSortResults = (
FOR u in cSort
LET d = (
FOR docs IN `a`
FILTER docs.`sort` == u
RETURN docs
)
RETURN d
)
/* Append the result of the last query (all the non-defined sort values) to the definedSortResults in the output array */
LET output = (
APPEND (definedSortResults, (
FOR docs IN `a`
FILTER docs.`sort` NOT IN cSort
RETURN docs
)
)
)
/* Finally FLATTEN and RETURN the output variable */
RETURN FLATTEN(output)
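The defined-values-first-then-the-rest logic of the AQL above can be sanity-checked with a plain-script sketch over the same dummy data (the doc objects here are trimmed to just their sort field):

```javascript
// Dummy docs mirroring the AQL snippet's collection (sort fields only).
var docs = [{ sort: 2 }, { sort: 3 }, { sort: 4 }, { sort: 1 }, { sort: 5 }];
var cSort = [5, 3, 1];

// One pass per defined sort value (the "n queries")...
var defined = [];
cSort.forEach(function (v) {
    docs.forEach(function (d) { if (d.sort === v) defined.push(d); });
});

// ...then the remainder, i.e. everything not in cSort (the "+1" query).
var rest = docs.filter(function (d) { return cSort.indexOf(d.sort) === -1; });

var output = defined.concat(rest);
console.log(output.map(function (d) { return d.sort; })); // [5, 3, 1, 2, 4]
```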
2) A different approach would be to extend AQL with a function written in JavaScript that would essentially do the same steps as above.
Of course, you could also open up a feature request on ArangoDB's GitHub Page, and maybe the nice folks at ArangoDB will consider it for inclusion. :)
Hope that helps