firebase-import timing out?

I'm using the firebase-import tool to upload JSON data to my Firebase database. I consistently get the following error when uploading a JSON file that is ~40 MB. Any ideas why? The output seems intentionally cryptic.
It always happens when the upload is 42% complete, and about that much of the JSON does get successfully uploaded to Firebase.
FIREBASE INTERNAL ERROR: Server Error: ClientId[------]:ErrorId[5]: Error on incoming message
Importing [==================== ] 42% (1610/3865)
/usr/local/lib/node_modules/firebase-import/node_modules/firebase/lib/firebase-node.js:44
function ac(a){try{a()}catch(b){setTimeout(function(){throw b;},Math.floor(0))
^
AssertionError: false == true
at onComplete (/usr/local/lib/node_modules/firebase-import/bin/firebase-import.js:222:7)
at /usr/local/lib/node_modules/firebase-import/node_modules/firebase/lib/firebase-node.js:128:47
at ac (/usr/local/lib/node_modules/firebase-import/node_modules/firebase/lib/firebase-node.js:44:20)
at X (/usr/local/lib/node_modules/firebase-import/node_modules/firebase/lib/firebase-node.js:128:22)
at /usr/local/lib/node_modules/firebase-import/node_modules/firebase/lib/firebase-node.js:121:291
at /usr/local/lib/node_modules/firebase-import/node_modules/firebase/lib/firebase-node.js:85:276
at md.h.bc (/usr/local/lib/node_modules/firebase-import/node_modules/firebase/lib/firebase-node.js:86:104)
at ad.bc (/usr/local/lib/node_modules/firebase-import/node_modules/firebase/lib/firebase-node.js:77:364)
at Q.Od (/usr/local/lib/node_modules/firebase-import/node_modules/firebase/lib/firebase-node.js:75:280)
at Ec (/usr/local/lib/node_modules/firebase-i

It turns out this particular error was due to a key path in the JSON exceeding Firebase's maximum key path length, which is 768 characters.
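For anyone hitting the same wall: one way to locate the offending entry before re-running the import is a quick scan of the export. This is a minimal Node.js sketch; the script is mine, not part of firebase-import, and it takes the JSON file as a command-line argument:

// find-long-keys.js -- hypothetical helper, not part of firebase-import.
// Walks an exported JSON tree and flags any key longer than the
// documented 768-character limit before you re-run the import.
const fs = require('fs');

const MAX_KEY_LENGTH = 768;

function walk(node, path) {
  if (node === null || typeof node !== 'object') return;
  for (const key of Object.keys(node)) {
    if (key.length > MAX_KEY_LENGTH) {
      console.log(`key of length ${key.length} at /${path.join('/')}`);
    }
    walk(node[key], path.concat(key));
  }
}

walk(JSON.parse(fs.readFileSync(process.argv[2], 'utf8')), []);

Run it as node find-long-keys.js export.json and fix any paths it reports before importing again.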

Related

How to correctly annotate a csv file for uploading into a bucket in InfluxDB

I am trying to evaluate InfluxDB as a real-time time-series data visualization tool. I have an account with InfluxDB and I have created a bucket for data storage. I now want to upload a CSV file into the bucket via the click-to-upload feature, but I keep getting errors associated with incorrect annotations. The last error I received was:
'Failed to upload the selected CSV: error in csv.from(): failed to read metadata: failed to read header row: wrong number of fields'
I have tried to decipher the docs and examples on how to annotate a CSV file and have tried many different combinations of #datatype, #group, and #default, but nothing works.
This is the latest attempt, which generated the error above.
#datatype,string,string,double,dateTime
#group,true,true,false,false
#default,,,,
_measurement,station,_value,_time
device,MBL,-0.814075542,1.65E+18
device,MBL,-0.837942395,1.65E+18
device,MBL,-0.862699339,1.65E+18
device,MBL,-0.891686336,1.65E+18
device,MBL,-0.891492408,1.65E+18
device,MBL,-0.933193098,1.65E+18
device,MBL,-0.933193098,1.65E+18
device,MBL,-0.976859072,1.65E+18
device,MBL,-0.981019863,1.65E+18
device,MBL,-1.011647128,1.65E+18
device,MBL,-1.017813258,1.65E+18
Any thoughts would be greatly appreciated. Thanks.
From the sample data above, I assume "device" is the name of a measurement and "MBL" is the value of a tag whose key is station. Hence there is one measurement, one tag, one field, and a timestamp.
You are also mixing data types and line-protocol elements in your annotations. You could try the following version:
#datatype,measurement,tag,double,dateTime:number
#default,device,MBL,,
thisIsYourMeasurementName,station,thisIsYourFieldKeyName,time
device,MBL,-0.814075542,1652669077000000000
device,MBL,-0.837942395,1652669077000000001
device,MBL,-0.862699339,1652669077000000002
device,MBL,-0.891686336,1652669077000000003
device,MBL,-0.891492408,1652669077000000004
device,MBL,-0.933193098,1652669077000000005
device,MBL,-0.933193098,1652669077000000006
device,MBL,-0.976859072,1652669077000000007
device,MBL,-0.981019863,1652669077000000008
device,MBL,-1.011647128,1652669077000000009
device,MBL,-1.017813258,1652669077000000010
Note that the time column should avoid scientific notation such as "1.65E+18"; use full integer nanosecond timestamps instead.
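If the browser upload keeps rejecting the file, the same annotated CSV can also be loaded with the influx CLI. This is a sketch; the bucket and file names are placeholders:

influx write --bucket my-bucket --file data.csv --format csv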

Why do these WFS layers fail to render in QGIS?

I tried to download the layers "River U Lancang", "River Maintrib", and "Main River Line" from the following WFS server: https://geo.mrcmekong.org/geoserver/wms?service=wfs&request=GetCapabilities
Every other layer works, but these three layers throw back the following errors:
2022-03-08T16:29:54 WARNING Error when parsing GetFeature response : Error: not well-formed (invalid token) on line 1, column 720
2022-03-08T16:29:54 WARNING Retrying request https://geo.mrcmekong.org/geoserver/wms?SERVICE=WFS&REQUEST=GetFeature&VERSION=1.0.0&TYPENAME=mrc:River U Lancang&SRSNAME=EPSG:32648&OUTPUTFORMAT=GML3: 2/3
2022-03-08T16:29:56 WARNING Error when parsing GetFeature response : Error: not well-formed (invalid token) on line 1, column 720
2022-03-08T16:29:56 WARNING Retrying request https://geo.mrcmekong.org/geoserver/wms
The response contains an invalid XML name, mrc:River U Lancang gml:id="River U Lancang.1"> - spaces are not allowed in XML element names or gml:id values. I think the only real solution is to contact the owner of the service and ask them to fix the layer names.
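One possible workaround (untested against this server) is to request GeoJSON instead of GML, since JSON imposes no such restriction on identifiers; note the space in the type name must be percent-encoded:

https://geo.mrcmekong.org/geoserver/wms?SERVICE=WFS&REQUEST=GetFeature&VERSION=1.0.0&TYPENAME=mrc:River%20U%20Lancang&OUTPUTFORMAT=application/json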

Rvest parse.response 404 error is breaking my for loop. How to keep looping?

I have the following section of code nested within a for loop:
mywebsite <- html(webstring)
cast <- html_nodes(mywebsite,".some-node")
text_of_cast<-html_text(cast)
The problem is that one of the URLs returns the following error:
Error in parse.response(r, parser, encoding = encoding) : client
error: (404) Not Found
And this error breaks my for loop because the page doesn't exist.
Is it possible to ignore this error and keep looping?
It sounds like the "try" function from base R is what you are looking for.
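For example, a minimal sketch (webstrings stands in for your vector of URLs; the selector comes from the question):

library(rvest)

text_of_cast <- list()
for (webstring in webstrings) {
  mywebsite <- try(html(webstring), silent = TRUE)  # returns a "try-error" object on a 404
  if (inherits(mywebsite, "try-error")) next        # skip the missing page and keep looping
  cast <- html_nodes(mywebsite, ".some-node")
  text_of_cast[[webstring]] <- html_text(cast)
}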

R - Twitter Extraction - Error in .subset2(x, i, exact=exact)

I am making an R script to get all of the mentions (#username) of a specific set of users.
My first issue isn't a big deal. I work from home as well as at the office. At work, the code works fine. At home, I get Error 32 - Could not authenticate you from OAuth, using the exact same code, key, secret, and token. I have tried resetting my secret key/token; same thing. Not a problem, since I can log in remotely, but it's frustrating.
The REAL issue here...
I construct a URL (ex: final_url = "https://api.twitter.com/1.1/search/tweets.json?q=#JimFKenney&until=2015-10-25&result_type=recent&count=100")
Then I search Twitter for my query of #usernameDesired to get all the comments where they were mentioned.
mentions = GET(final_url, sig)
This works fine, but then I want my data in a usable format so I do...
library(rjson)
#install.packages("jsonlite", repos="http://cran.rstudio.com/")
library(jsonlite)
#install.packages("bit64", repos="http://cran.rstudio.com/")
json = content(mentions)
I then get the following error -
$statuses
Error in .subset2(x, i, exact = exact) : subscript out of bounds
I don't have even the first idea of what can be causing this.
Any help is greatly appreciated.
EDIT 1: For clarity, I get the error when trying to see what is in json. The line "json = content(mentions)" executes fine; I then type "json" to see what is in the variable, and I get the above error that starts with $statuses.
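One diagnostic step worth trying (a sketch; it assumes mentions comes from httr::GET as above) is to confirm the request actually succeeded before touching $statuses, since an error response parses to a list without that element:

library(httr)

if (http_error(mentions)) {
  stop("Twitter returned HTTP ", status_code(mentions))
}
json <- content(mentions, as = "parsed")
str(json, max.level = 1)  # $statuses should appear among the top-level names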

JMeter CSV issue

Please help me with the following issue:
I have a simple JMeter test where the variables are stored in a CSV file. There is only one request in the test:
GET .../api/${page}, where ${page} is a variable from the CSV
Everything goes well with thread properties of, for example, 10 threads x 30 loop count.
If I increase either parameter, e.g. to 10x40 or 15x30, I receive at least one error, and it looks like a JMeter issue:
one (random) request isn't able to take its variable from the CSV and I get an error:
.../api/page returns a 404 error
So the question is: is there any limit on JMeter's connection to the CSV file?
Thanks in advance.
A key point to focus on is the way your application manages the case where two different users request the same page.
There are a few checks that I would recommend:
be sure that the "Recycle on EOF" property is set to true (see the sketch below)
be sure that you have more lines in the CSV than the number of threads you are firing
use a "View Results Tree" listener to investigate the kind of error you are getting
Let us know.
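For reference, a minimal sketch of how those settings look inside the .jmx (the file name and variable name are placeholders):

<CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet" testname="CSV Data Set Config">
  <stringProp name="filename">pages.csv</stringProp>
  <stringProp name="variableNames">page</stringProp>
  <boolProp name="recycle">true</boolProp>      <!-- Recycle on EOF: true -->
  <boolProp name="stopThread">false</boolProp>  <!-- do not stop threads at end of file -->
  <stringProp name="shareMode">shareMode.all</stringProp>
</CSVDataSet>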