PathSegList is deprecated and removed in Chrome 48

In Chrome 48, PathSegList has been removed. As I read in the answers to another question, "Alternative for deprecated SVG pathSegList", Chrome is providing a new API, but it seems this new API is not yet available. What is another alternative, and how can I use it? I know this is a duplicate, but the question I linked is not helping me.

You do not need the pathSegList polyfill (pathSeg.js). The recommended approach is the path data polyfill, which implements the new getPathData()/setPathData() API and lets you edit path data as a plain array of segment objects.
var path = document.querySelector('path'); // your <path> element
// Be sure the path data polyfill has been added to your page before calling getPathData()
var pathdata = path.getPathData();
console.log(pathdata);
/*
You will get an array containing all the path data details, like this:
[
  { "type": "M", "values": [ 50, 50 ] },
  { "type": "L", "values": [ 200, 200 ] }
]
*/
//replacement for createSVGPathSegMovetoRel and appendItem
pathdata.push({type:'m', values:[200,100]});
path.setPathData(pathdata);
//replacement for createSVGPathSegMovetoAbs and appendItem
pathdata.push({type:'M', values:[300,120]});
path.setPathData(pathdata);
//replacement for createSVGPathSegLinetoAbs and appendItem
pathdata.push({type:'L', values:[400,120]});
path.setPathData(pathdata);
console.log(path.getAttribute('d'));
//create a new path data array
var pathdata = [
{ "type": "M", "values": [ 50, 50 ] },
{ "type": "L", "values": [ 200, 200 ] }
];
path.setPathData(pathdata);
console.log(path.getAttribute('d'));
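The same array-based approach replaces the cases where you previously read, replaced, or removed existing segments through pathSegList. Here is a minimal sketch, assuming the polyfill is loaded and the path already contains at least two segments:
var path = document.querySelector('path');
var segments = path.getPathData();
// Replacement for pathSegList.getItem(i): index into the array.
console.log(segments[1].type, segments[1].values);
// Replacement for replaceItem() / mutating a segment: edit it, then write back.
segments[1] = { type: 'L', values: [250, 150] };
path.setPathData(segments);
// Replacement for removeItem(i): use normal array methods, then write back.
segments.splice(1, 1);
path.setPathData(segments);
console.log(path.getAttribute('d'));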


Couchbase: How to remove channel access for a document through Sync function

I'm new to Couchbase. I want to update the channels for some of my documents through the sync function. Right now, though, it is not replacing the channel: it adds an extra channel to the document's metadata but does not remove the existing one. Can anyone suggest how I can remove the existing channel from the document?
Sync Function:
function(doc, oldDoc) {
    //....
    if (doc.docType === "differentType") {
        channel("differentChannel");
        expiry(2332800);
        return;
    }
    //.......
}
Document:
{
    "channels": [
        "abcd"
    ],
    "docType": "differentType",
    "_id": "asjnc"
}
Metadata:
{
    "meta": {
        "id": "asjnc",
        "rev": "64-1b500000000",
        "expiration": 1650383285,
        "flags": 0,
        "type": "json"
    },
    "xattrs": {
        "_sync": {
            "rev": "1-db30e607872",
            "sequence": 777,
            "recent_sequences": [
                777
            ],
            "history": {
                "revs": [
                    "1-db30e607872"
                ],
                "parents": [
                    -1
                ],
                "channels": [
                    [
                        "differentChannel"
                    ]
                ]
            },
            "channels": {
                "differentChannel": null
            }
        }
    }
}
Expectation of the document with the same metadata:
{
    "channels": [ ], // <--- no channels
    "docType": "differentType",
    "_id": "asjnc"
}
With this sync function, for a document of type differentType, the channel differentChannel is set in the xattrs section of the metadata. But the channel that was added earlier from Couchbase Lite is not removed. Can anyone help?
I answered this in the Couchbase Forums: https://forums.couchbase.com/t/remove-channels-from-a-document/33212
The "channels" property in a document is counter-intuitively not describing what channels the document is currently in - it's just a user-definable field that happens to be the default routing for channels if you don't specify a sync function. It's up to the writer of the document what it should contain.
If you have another means of channel assignments (like "docType" in your case), then you don't need to specify "channels" in the document. The sync metadata shows that the document is in "differentChannel" at revision 1-db30e607872 but the contents of the document can be arbitrary.
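If docType alone should drive the channel assignment, one way to express that is to route purely on doc.docType and ignore the document's own "channels" field entirely. This is only a sketch of the idea, not your exact production sync function, and the fallback channel name is made up for illustration:
function (doc, oldDoc) {
    // Route purely on docType; the document body does not need to carry
    // a "channels" property at all for this routing to work.
    if (doc.docType === "differentType") {
        channel("differentChannel");
        expiry(2332800);
        return;
    }
    // Illustrative fallback only: send everything else to a catch-all channel.
    channel("catchAll");
}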

Update JSON object numbers in ascending sequence

I am having a problem with my NFT JSON files. Because of limited resources, I generated my 10k NFT collection as 10 batches of 1000, so now I have ten collections of 1000 items each instead of a single collection of 10000. The JSON objects in each collection are numbered 1-1000. I want to copy all the JSON objects into a single file and renumber their "edition" values in sequence from 1 to 10000.
Thanks in advance.
Here is the metadata for one NFT:
{
"file_path": "ipfs://NewUriToReplace/1.png",
"nft_name": "NFT #1",
"external_link": "",
"description": "NFT Description",
"collection": "Collection Name",
"properties": [
{
"type": "type",
"name": "name"
},
{
"type": "type",
"name": "name"
},
{
"type": "type",
"name": "name"
},
{
"type": "type",
"name": "name"
},
{
"type": "type",
"name": "name"
}
],
"levels": [],
"stats": [],
"unlockable_content": [],
"explicit_and_sensitive_content": false,
"supply": 1,
"blockchain": "Polygon",
"price": 0.005,
"quantity": 1,
"dna": "a2fc94a3a51a7c853c01b553019628907f437d2a",
"edition": 1,
"date": 1642499902138,
"creator": "Artist",
"seller_fee_basis_points": 250,
"address": "0x2c41a4e7d9321b1134b076bb0be866709fda6ffb",
"share": 100,
"Date": "January 2022",
"compiler": "HashLips Art Engine"
}
You could, for instance, use PHP: read each file, decode it, append the result to an array, edit the array, then dump it to a single output file.
Simple PHP code to deal with the JSON:
$arr = [];
// Loop through the files (adjust the count and file names to your setup;
// you could also use glob() or a library for this)
for ($i = 0; $i < 100; ++$i) {
    array_push($arr, json_decode(file_get_contents("file$i.json"), true));
}
// Here $arr contains all the decoded JSON documents; you can access each one
// as a normal associative array and do your edits, e.g. renumber the editions:
foreach ($arr as $index => $item) {
    $arr[$index]['edition'] = $index + 1;
}
// Dump the array back out as a single JSON file
file_put_contents("outfile.json", json_encode($arr));
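If you are already in the HashLips / Node.js ecosystem, the same idea works there. This is only a sketch, not a drop-in script: it assumes the ten per-collection metadata files are named collection1.json through collection10.json (adjust names and paths to your layout) and that each file contains an array of 1000 metadata objects like the one shown in the question.
const fs = require('fs');

// Merge the ten collection files and renumber the editions 1..10000.
// collection1.json ... collection10.json are assumed file names.
let merged = [];
for (let c = 1; c <= 10; c++) {
  const items = JSON.parse(fs.readFileSync(`collection${c}.json`, 'utf8'));
  merged = merged.concat(items);
}

merged.forEach((item, index) => {
  item.edition = index + 1;              // 1..10000 in sequence
  item.nft_name = `NFT #${index + 1}`;   // keep the display name in sync
});

fs.writeFileSync('merged.json', JSON.stringify(merged, null, 2));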
Found the solution in the HashLips Art Engine's main.js. If you want to generate your NFT collection in several batches rather than in one step (for example 10 batches of 1000 instead of 10000 at once), change the snippet below in main.js: replace the 1 assigned to editionCount and the 1 in the for-loop initializer (network == NETWORK.sol ? 0 : 1) with the number you want to start generating from.
Here you can see I have replaced the 1s with 1001, so generation starts from 1001, not 1.
let layerConfigIndex = 0;
let editionCount = 1001;
let failedCount = 0;
let abstractedIndexes = [];
for (
let i = network == NETWORK.sol ? 0 : 1001;
i <= layerConfigurations[layerConfigurations.length - 1].growEditionSizeTo;
i++
) {
abstractedIndexes.push(i);
}

JSON document inserted as binary object in Couchbase

I'm trying to insert a Java POJO into the Couchbase store, and the JSON passed to the cas call looks like this:
{
"key": "sampleKey",
"myMap": {
"Messages": [
{
"field": "f1",
"label": "l1"
},
{
"field": "f2",
"label": "l2"
},
{
"field": "f3",
"label": "l3"
},
{
"field": "f4",
"label": "l4"
}
],
"Orders": [
{
"field": "f1",
"label": "l1"
},
{
"field": "f2",
"label": "l2"
},
{
"field": "f3",
"label": "l3"
},
{
"field": "f4",
"label": "l4"
},
{
"field": "f5",
"label": "l5"
}
]
}
}
I have verified that this is valid JSON, yet it is still being inserted as a binary object: when I look up this document via the Couchbase GUI, it shows up as a base64-encoded string. A couple of other documents are fine, though. I am wondering if this happens only with the cas method and not with set.
The relevant Java code is this:
String myJson = objectMapper.writeValueAsString(cacheObject);
CASResponse response = couchbaseClient.cas(cacheObject.getKey(), casValue.getCas(), myJson, PersistTo.MASTER);
// Java POJO
public class CacheObject
{
    private String key;
    private Map<String, List<FieldLabel>> myMap = new HashMap<String, List<FieldLabel>>();

    // setters and getters
}
Any pointers on why this could be happening will be appreciated.
Update 1: I'm using Couchbase Java client version 1.4.4 and server version 2.5.
Update 2: I don't think this has to do with my code or my JSON. I tried replacing my JSON with a large, valid JSON document and saw the same result in the Couchbase GUI. I think this is happening because the size of the document goes over 2.5 KB. The JSON I pasted above has the actual field and label values removed; the real ones are slightly longer strings.
Strangely, when I modify my document, documents below about 960 characters generally show up as JSON, while slightly larger ones are stored as binary.
If the size of the document is above 2.5 KB, the document will not be editable in the console; this threshold can be changed in a file called documents.js.

How to open a .mongo file and export the content into csv?

EDIT 2014-05-01: I tried fromJSON first (as suggested below), but that only parsed the first line. I found out that there were commas missing between the braces of each JSON record, so I changed that in TextEdit and saved the file. I also added [ at the beginning of the file and ] at the end, and then it worked with fromJSON. Now the next step: from a list (with embedded lists) to a data frame (or CSV).
I get a data package from edX every now and then on the courses we are evaluating. Some of these are just plain .csv files, which are quite easy to handle; others are more difficult for me (not having a CS or programming background).
I have two files I want to open and parse into CSV files for analysis in R. I have tried many of the json2csv tools out there, but to no avail. I also tried the simple methods described here to turn JSON into CSV.
The data is confidential, so I cannot share the entire data set, but I will share the first records of one file; maybe that helps. The problem is that I cannot find anything anywhere about .mongo files, which seems quite strange to me. Do they even exist? Or is this just a JSON file that may be corrupted (which could explain the errors)?
Any suggestions are welcome.
The first few records in one of the .mongo files:
{
"_id": {
"$oid": "52d1e62c350e7a3156000009"
},
"votes": {
"up": [
],
"down": [
],
"up_count": 0,
"down_count": 0,
"count": 0,
"point": 0
},
"visible": true,
"abuse_flaggers": [
],
"historical_abuse_flaggers": [
],
"parent_ids": [
],
"at_position_list": [
],
"body": "the delft university accredited course with the scholarship (fundamentals of water treatment) is supposed to start in about a month's time. But have the scholarship list been published? Any tentative date??",
"course_id": "DelftX/CTB3365x/2013_Fall",
"_type": "Comment",
"endorsed": false,
"anonymous": false,
"anonymous_to_peers": false,
"author_id": "269835",
"comment_thread_id": {
"$oid": "52cd40c5ab40cf347e00008d"
},
"author_username": "tachak59",
"sk": "52d1e62c350e7a3156000009",
"updated_at": {
"$date": 1389487660636
},
"created_at": {
"$date": 1389487660636
}
}{
"_id": {
"$oid": "52d0a66bcb3eee318d000012"
},
"votes": {
"up": [
],
"down": [
],
"up_count": 0,
"down_count": 0,
"count": 0,
"point": 0
},
"visible": true,
"abuse_flaggers": [
],
"historical_abuse_flaggers": [
],
"parent_ids": [
{
"$oid": "52c63278100c07c0d1000028"
}
],
"at_position_list": [
],
"body": "I got it. Thank you!",
"course_id": "DelftX/CTB3365x/2013_Fall",
"_type": "Comment",
"endorsed": false,
"anonymous": false,
"anonymous_to_peers": false,
"parent_id": {
"$oid": "52c63278100c07c0d1000028"
},
"author_id": "2655027",
"comment_thread_id": {
"$oid": "52c4f303b03c4aba51000013"
},
"author_username": "dmoronta",
"sk": "52c63278100c07c0d1000028-52d0a66bcb3eee318d000012",
"updated_at": {
"$date": 1389405803386
},
"created_at": {
"$date": 1389405803386
}
}{
"_id": {
"$oid": "52ceea0cada002b72c000059"
},
"votes": {
"up": [
],
"down": [
],
"up_count": 0,
"down_count": 0,
"count": 0,
"point": 0
},
"visible": true,
"abuse_flaggers": [
],
"historical_abuse_flaggers": [
],
"parent_ids": [
{
"$oid": "5287e8d5906c42f5aa000013"
}
],
"at_position_list": [
],
"body": "if u please send by mail \n",
"course_id": "DelftX/CTB3365x/2013_Fall",
"_type": "Comment",
"endorsed": false,
"anonymous": false,
"anonymous_to_peers": false,
"parent_id": {
"$oid": "5287e8d5906c42f5aa000013"
},
"author_id": "2276302",
"comment_thread_id": {
"$oid": "528674d784179607d0000011"
},
"author_username": "totah1993",
"sk": "5287e8d5906c42f5aa000013-52ceea0cada002b72c000059",
"updated_at": {
"$date": 1389292044203
},
"created_at": {
"$date": 1389292044203
}
}
R doesn't have "native" support for these files, but there is a JSON parser in the rjson package. So you might load the .mongo file with:
library(rjson)
myfile <- "path/to/myfile.mongo"
myJSON <- readLines(myfile)
myNiceData <- fromJSON(myJSON)
Since rjson converts into a data structure that fits the object being read, you'll have to do some additional snooping, but once you have an R data type you shouldn't have any trouble working with it from there.
Another package to consider when parsing JSON data is jsonlite. It will make data frames for you, so you can write them out as CSV with write.table or some other applicable method for writing objects.
NOTE: if it is easier to connect to the MongoDB and get the data from a request, then RMongo may be a good bet. The R-Bloggers also made a post about using RMongo that has a nice little walkthrough.
I used rjson as suggested by @theWanderer and, with the help of a colleague, wrote the following code to parse the data into columns, choosing the specific columns that are needed and checking each record to make sure it returns the right variables.
Entire workflow:
Checked some of the data in jsonlint and corrected the errors: },{ instead of }{ between records, and [ and ] at the beginning and end of the file
Made a smaller file to play with, containing about 11 JSON records
Used the code below to parse the data file, first checking that the different listItems are not lists themselves (that causes problems). As you will see, I also removed things like \n because they gave errors, and added an empty value for parent_id when it is missing from the data (otherwise the columns would get mixed up)
The code to import the .mongo file into R and then parse it into CSV:
library(rjson)
###### set working directory to write out the data file
setwd("/your/favourite/dir/json to csv/")
#never ever convert strings to factors
options(stringsAsFactors = FALSE)
#import the .mongo file to R
temp.data = fromJSON(file="temp.mongo", method="C", unexpected.escape="error")
file.remove("temp.csv") ## removes the old datafile if there is one
## (so the data is not appended to the file,
## but a new file is created)
listItem = temp.data[[1]] ## prepare the listItem the first time
for (listItem in temp.data) {
    parent_id = ""
    if (length(listItem$parent_id) > 0) {
        parent_id = listItem$parent_id
    }
    write.table(t(c(
        listItem$votes$up_count, listItem$visible, parent_id,
        gsub("\n", "", listItem$body), listItem$course_id, unlist(listItem["_type"]),
        listItem$endorsed, listItem$anonymous, listItem$author_id,
        unlist(listItem$comment_thread_id), listItem$author_username,
        as.POSIXct(unlist(listItem$created_at)/1000, origin="1970-01-01"))), # end t(), c()
        file="temp.csv", sep="\t", append=TRUE, row.names=FALSE, col.names=FALSE)
}

JSON format problem when using Factual geo data

I'm using the Factual API to fetch location data. Their RESTful service returns data in JSON format as follows, but it is not the "usual" JSON structure: there are no attribute keys on the rows; instead, there's a "fields" array that lists all the field keys.
So the question is: how do I retrieve the attribute I need? Please give an example if possible. Thanks in advance.
{
"response": {
"total_rows": 2,
"data": [
[
"ZPQAB5GAPEHQHDy5vrJKXZZYQ-A",
"046b39ea-0951-4add-be40-5d32b7037214",
"Hanko Sushi Iso Omena",
60.16216,
24.73907
],
[
"2TptHCm_406h45y0-8_pJJXaEYA",
"27dcc2b5-81d1-4a72-b67e-2f28b07b9285",
"Masabi Sushi Oy",
60.21707,
24.81192
]
],
"fields": [
"subject_key",
"factual_id",
"name",
"latitude",
"longitude"
],
"rows": 2,
"cache-state": "CACHED",
"big-data": true,
"subject_columns": [
1
]
},
"version": "2",
"status": "ok"
}
If you know the field name, and the data isn't guaranteed to stay in the same order, I would do a transform on the data so I can reference the fields by name:
// x is the parsed JSON response shown above
var fieldIndex = {};
// Build a lookup from field name to column index
for (var key in x.response.fields) {
    fieldIndex[x.response.fields[key]] = key;
}
// Then reference any column in a data row by field name
for (var key in x.response.data) {
    alert(x.response.data[key][fieldIndex.name]);
}
// Field map (hard-coded column positions)
var _subject_key = 0,
    _factual_id = 1,
    _name = 2,
    _latitude = 3,
    _longitude = 4;
// Example (_json is the parsed JSON response):
alert(_json.response.data[0][_factual_id]);
Demo: http://jsfiddle.net/AlienWebguy/9TEJJ/
I work at Factual. Just wanted to mention that we've launched the beta of version 3 of our API. Version 3 solves this problem directly, by including the attribute keys inline with the results, as you would hope. (Your question applies to version 2 of our API. If you're able to upgrade to version 3 you'll find some other nice improvements as well. ;-)