youtube-dl: Failed to parse JSON

I posted a question about this yesterday on the GitHub support page and it got flagged as a duplicate; the original answer is here. The issue doesn't fix itself like it did for that user. Instead, it seems to come and go with no pattern, so I don't have a reliable way to reproduce it. Some songs will work at one point in time, then won't a couple of minutes later.
Error:
[debug] Encodings: locale cp1252, fs utf-8, out UTF-8, pref cp1252
[debug] youtube-dl version 2020.09.20
[debug] Python version 3.7.8 (CPython) - Windows-10-10.0.19041-SP0
[youtube:search] query "iron man 3 song": Downloading page 1
[debug] exe versions: none
[debug] Proxy map: {}
ERROR: query "song name": Failed to parse JSON caused by JSONDecodeError('Expecting value: line 1 column 1 (char 0)')); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
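For reference, that JSONDecodeError is simply what Python's json module raises when it is handed an empty or non-JSON string (for example, an HTML page where JSON was expected):
import json

# Reproduces the exact message youtube-dl reports: parsing an empty
# (or otherwise non-JSON) string fails immediately at char 0.
json.loads('')  # JSONDecodeError: Expecting value: line 1 column 1 (char 0)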
I get the issue when attempting to extract the data from the video. Here is a snippet of the code I am using:
import youtube_dl

ydlOps = {
    'format': 'bestaudio/best',
    'outtmpl': './%(title)s.webm',
    'noplaylist': True,
    'extractaudio': True,
    'audioformat': 'webm',
    'default_search': 'ytsearch1',
    'quiet': True,  # was misspelled as 'quite', which youtube-dl silently ignores
    'verbose': True,
    'version': True
}
with youtube_dl.YoutubeDL(ydlOps) as downloader:
    songData = downloader.extract_info(url, download=download)
I have changed the options, tried other options that reportedly worked for others, and nothing seems to make a difference. Some queries will work, then fail, then work again.
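For now, the only workaround I have is to retry; a minimal sketch (assuming the failure is transient and surfaces as youtube_dl.utils.DownloadError):
import time
import youtube_dl

def extract_with_retries(url, opts, retries=3, delay=2):
    # Retry extract_info a few times, since the JSON parse failure
    # seems to come and go between requests.
    for attempt in range(retries):
        try:
            with youtube_dl.YoutubeDL(opts) as downloader:
                return downloader.extract_info(url, download=False)
        except youtube_dl.utils.DownloadError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)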

I think this is a youtube-dl bug. I wrote a parser for YouTube searches and it broke as well.
Previously, all the JSON data in YouTube's response was stored like this:
window["ytInitialData"] = {...}
So you just had to search the server's response for the string 'window["ytInitialData"]' to find the relevant JSON and extract it. But now YouTube stores the JSON like this inside the HTML file sent by the server:
var ytInitialData = {...}
This needs to be handled on youtube-dl's side when parsing the results.
What's strange is that sometimes YouTube serves the previous version and sometimes the current one. I suspect this is because the JavaScript change is being progressively rolled out across YouTube's servers.
Also note that the line containing the JSON now ends with '; ' instead of just ';'. This might also require a change in youtube-dl.
You need to submit a pull request to youtube-dl or wait for somebody to fix it.
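For illustration, a minimal sketch of an extraction that copes with both variants (assuming you already have the raw HTML of a results page; this is not youtube-dl's actual code):
import json
import re

def extract_yt_initial_data(html):
    # Match both the old window["ytInitialData"] = {...}; assignment
    # and the new var ytInitialData = {...}; form. The non-greedy match
    # up to the first '};' is good enough for a sketch.
    match = re.search(
        r'(?:window\["ytInitialData"\]|var ytInitialData)\s*=\s*(\{.*?\})\s*;',
        html,
        re.DOTALL,
    )
    if match is None:
        raise ValueError('ytInitialData not found in page')
    return json.loads(match.group(1))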

Related

chrome.downloads.download never starts, but no errors

I'm trying to create a Chrome extension that uses the chrome.downloads API, but it doesn't seem to... do anything. This is my first time developing for Chrome, so I'm a bit unfamiliar with the environment, but no one else online seems to have this problem.
While searching for the issue, I've minimized my code to the following (using the jQuery library as a test download, though I have tried many others):
chrome.downloads.download({
  url: "https://code.jquery.com/jquery-3.6.0.min.js",
}, (dlid) => {
  // Look the download up by its id as soon as the callback fires.
  chrome.downloads.search({id: dlid}, (dl) => {
    console.log(dl[0]);
  });
});
This results in the following DownloadItem being printed in the console:
byExtensionId: [redacted for privacy]
byExtensionName: [redacted for privacy]
bytesReceived: 0
canResume: false
danger: "safe"
exists: true
fileSize: 0
filename: ""
finalUrl: "https://code.jquery.com/jquery-3.6.0.min.js"
id: 2675
incognito: false
mime: "application/javascript"
paused: false
referrer: ""
startTime: "2021-09-04T09:18:02.401Z"
state: "in_progress"
totalBytes: 0
url: "https://code.jquery.com/jquery-3.6.0.min.js"
But the file doesn't download, nor does anything happen to indicate even an attempt at it. Repeated reads of the DownloadItem always give the same result: the download is in progress and 0 bytes have been received so far. Nothing else on my system gives any indication that the file is being downloaded, but it never results in any errors either; it just sits there, seemingly idle.
Here's what I've ruled out to be the cause so far:
Permission is declared in manifest.json: "permissions": [..., "downloads", ...]. Removing it and running the same code throws errors.
Context is correct, this is not run in a content script. Trying to do so throws errors.
Playing around with any of the DownloadOptions parameters has no effect at all. DownloadOptions.saveAs opens no dialog, and DownloadOptions.filename has no effect on the DownloadItem.filename which is always an empty string.
I've tried many other download targets, both different domains and different filetypes. The resulting mime parameter updates according to the URL extension, but otherwise it creates the same error-free perpetual idling.
I've even tried fake URLs with the same outcome: invalid URL syntax throws errors, but nonexistent URLs give the same idle result.
There's no evident Internet connectivity issues with my system, and I don't use any proxy or VPN.
I've run it on another machine, albeit with the same Chrome user login and the same Internet connection, with the same result.

JSZip read downloaded data (Angular 2)

I am trying to use JSZip to unzip a JSON file, but due to my limited understanding of how JSZip works, I get the response in a format that I don't know how to use.
So far this is my code:
this.rest.getFile(this.stlLocation).subscribe(
  data => {
    // Note: JSZip's file() ADDS an entry to a (new, empty) archive;
    // it does not unpack the downloaded data.
    let jsonFile = new JSZIP();
    jsonFile.file(data.url, data._body, {binary: true, compression: 'DEFLATE'});
    console.log(jsonFile);
  },
  err => {
    this.msgs.push({severity: 'error', summary: 'Error Message', detail: err});
  }
);
So I download a file using an Angular 2 service and use an observable to get the response. When the data is received, I call JSZip and try to unzip the file, but the result of the operation is an intricate object with my data scattered all over the place and buried inside several layers. All I want is the unzipped JSON file that I can open and process.
Thank you for your help,
Dino
After a bit of reading I have realized I was going down the wrong path. If you are downloading the file to a browser, you shouldn't have to do anything: browsers add the Accept-Encoding: 'deflate' header automatically, and it is both unnecessary and bad practice to do this at a DOM/JS level. If you are using NGINX, the following link may help you out:
NGINX COMPRESSION AND DECOMPRESSION

503 Service Unavailable: No registered leader was found after waiting for 4000ms

I've recently started using Solr; I'm on the latest version, 6.1.0. I followed the quick-start tutorial to get a feel for it. Being a Windows user, I had to resort to the alternative way of importing my .csv data, using the Post tool for Windows.
I am primarily interested in seeing how Solr handles and searches large data sets like the one I have. It is a 522 MB my_db.csv file which is properly formatted (I ran various Python scripts to check that).
I started SolrCloud by the usual procedure. Then I imported a small part of this dataset (to be specific, 29 lines of my_db.csv) to see if it works.
Shell:
C:\Users\MAC\Downloads\solr-6.1.0\solr-6.1.0>java -Dc=gettingstarted -Ddata=files -Dauto=yes -jar example\exampledocs\post.jar example\exampledocs\29lines.csv
Result was:
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file 29lines.csv (text/csv) to [base]
1 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/gettingstarted/update...
Time spent: 0:01:28.106
Fortunately, it worked perfectly and I was able to use the default Velocity search front end that they provide by going to http://localhost:8983/solr/gettingstarted_shard2_replica1/browse . It had all my data stored so far, 29 rows to be precise.
Now I wanted to see whether the whole 522 MB of data would import, so I used the same command (just replacing the .csv file, of course) and ran it. I did expect it to take a while, but after nearly 10 minutes it had inserted around 32,674 rows out of 1,300,000 and then threw this error.
Result was:
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file omdbFull.csv (text/csv) to [base]
SimplePostTool: WARNING: Solr returned an error #503 (Service Unavailable) for url: http://localhost:8983/solr/gettingstarted/update
SimplePostTool: WARNING: Response: <?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">503</int><int name="QTime">128191</int></lst><lst name="error"><lst name="metadata"><str name="error-class">org.apache.solr.common.SolrException</str><str name="root-error-class">org.apache.solr.common.SolrException</str></lst><str name="msg">No registered leader was found after waiting for 4000ms , collection: gettingstarted slice: shard2</str><int name="code">503</int></lst>
</response>
SimplePostTool: WARNING: IOException while reading response: java.io.IOException: Server returned HTTP response code: 503 for URL: http://localhost:8983/solr/gettingstarted/update
1 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/gettingstarted/update...
Time spent: 0:08:36.342
Summary
This was strange, and I wasn't exactly sure why it happened. Do I perhaps have to change some kind of "timeout" parameter for the commit to go through? Unfortunately, I wasn't able to find any such option for the Windows Post tool.
I found the solution to my problem. The problem wasn't that the file was huge (around 500 MB of CSV in my case); I'm sure even larger files would go through.
The thing is, I think Solr automatically detects what kind of values are being put into an index. For instance, my CSV had a "Years" column with values like "2015", "2014", "1970", and so on, but, unknown to me, the column also contained improper years like "2014-2015" and "1980-1988".
Solr would stop and throw an exception because these were not years but year ranges; it wasn't expecting values of that sort.
Summary
To fix the problem, I simply filtered out the faulty year rows and voilà! It processed my 500 MB CSV in around 15 minutes. After that, I had a nice database ready to be searched!
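For illustration, a sketch of that kind of filtering (the column name 'Years' and the file names here are assumptions, not the actual ones):
import csv
import re

# Keep only rows whose Years column is a plain four-digit year,
# dropping ranges like "2014-2015" that break Solr's field type guessing.
with open('my_db.csv', newline='', encoding='utf-8') as src, \
        open('my_db_clean.csv', 'w', newline='', encoding='utf-8') as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if re.fullmatch(r'\d{4}', row.get('Years') or ''):
            writer.writerow(row)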

Elasticsearch does not return jsonp

I'm trying to connect my Polymer element to my own Elasticsearch server.
My first problem was that they are on two different ports, so I had to choose JSONP because of cross-domain restrictions.
So I found out that I just have to add
http.jsonp.enable: true
in the elasticsearch.yml.
I'm starting the server simply by executing "elasticsearch.bat".
I've indexed data.
If I try to load the API via iron-jsonp-library, I'm always getting an unexpected token error.
<iron-jsonp-library id="libraryLoader"
library-url="http://127.0.0.1:9200/data/_search?pretty%%callback%%"
notify-event="api-load"
callbackName="jsonpCallback">
</iron-jsonp-library>
In Google Chrome, I'm getting following result from elasticsearch
{"took":2,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":5,"max_score":1.0,"hits":[{"_index":"data","_type":"data","_id":"5","_score":1.0,"_source":{"id":5,"name":"Meyr","manufacturer":"Meyr","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Meyr"}},{"_index":"data","_type":"data","_id":"2","_score":1.0,"_source":{"id":2,"name":"Meier","manufacturer":"Meier","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Meier"}},{"_index":"data","_type":"data","_id":"4","_score":1.0,"_source":{"id":4,"name":"Mair","manufacturer":"Mair","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Mair"}},{"_index":"data","_type":"data","_id":"1","_score":1.0,"_source":{"id":1,"name":"Maier","manufacturer":"Maier","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Maier"}},{"_index":"data","_type":"data","_id":"3","_score":1.0,"_source":{"id":3,"name":"Mayr","manufacturer":"Mayr","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Mayr"}}]}}
From what I've read about JSONP on the Internet, that is not JSONP.
Why is my Elasticsearch server not formatting the response correctly?
Are you on a version prior to 2.0? It looks like they removed JSONP in 2.0 (elastic.co/guide/en/elasticsearch/reference/2.2/…).
Also, pretty%%callback%% doesn't look right; the %%callback%% macro usually needs to be the value of a parameter (like onload=%%callback%%). The element replaces %%callback%% with the name of a global function that is generated for you.
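As a quick sanity check outside the browser, a sketch (assuming a pre-2.0 Elasticsearch with http.jsonp.enable: true, which wrapped the response when a callback query parameter was sent):
import requests

# Ask Elasticsearch to wrap the search response in a JSONP callback.
# If JSONP is active, the body should start with "jsonpCallback("
# rather than a bare JSON object.
resp = requests.get(
    'http://127.0.0.1:9200/data/_search',
    params={'callback': 'jsonpCallback', 'pretty': 'true'},
)
print(resp.text[:80])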

JMeter posting JSON application

Hello, I have a problem. I have configured JMeter to compare performance between Play Framework 2.2 and 2.3, to check the speed of the controllers. When I post something (here I'm creating an account), I can see in the Play console that the account is being created, but it is not saved to my database. When I did the same thing from another laptop, by typing in my IP address, it saved to the database and worked fine. I don't know where the problem could be, since the configuration is the same.
In JMeter, inside my thread group, I have added:
An HTTP Header Manager, where I added
Name: Content-Type Value: application/json
I have also added an HTTP Request sampler:
Server Address: localhost Port: 9000 Method: POST Content encoding: UTF-8
Path: api/accounts Implementation: HttpClient4 Protocol [http]: http
And in the parameters I added
Name: (nothing),
Value: {"username":"AccountTest1","password":"test6ccou49","email":"AccountTest1#dev.null"}
with only Include Equals turned on.
I also added a Constant Timer and the jp@gc Response Times Over Time listener from the plugin that builds the chart.
I don't know why, when I press Start, the chart builds and the Play console shows the account being created, but nothing is saved to the database.
I will be very thankful for any help.
Create a CSV file like the sample data below (a small generator sketch follows the sample):
AccountTest1,test6ccou49,AccountTest1#dev.null
AccountTest2,test6ccou50,AccountTest2#dev.null
AccountTest3,test6ccou51,AccountTest3#dev.null
... and so on
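If you need many accounts, a quick sketch to generate such a file (the file name and row count are arbitrary; the naming pattern just extends the sample above):
import csv

# Write N test accounts in the layout used above: username,password,email
with open('accounts.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for i in range(1, 1001):
        writer.writerow([
            'AccountTest%d' % i,
            'test6ccou%d' % (48 + i),
            'AccountTest%d#dev.null' % i,
        ])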
Add->Config Element->CSV Data Set Config
Keep the settings as mentioned below:
Filename: "complete file path including the file name, e.g. c:\folder1\folder2\file.csv"
File Encoding: "leave blank"
Variable Names: "user_name,pass_wd,e_mail"
Delimiter: ","
Allow Quoted Data: "false"
Recycle on EOF: "true"
Stop Thread on EOF: "false"
Sharing mode: "All threads"
Now the POST body should look like:
{"username":"${user_name}","password":"${pass_wd}","email":"${e_mail}"}
Hope this will help.
Try using "Post Body" in the HTTP sampler to post the values below:
{"username":"AccountTest1","password":"test6ccou49","email":"AccountTest1#dev.null"}
Hope this will help.
I have fixed it already; the problem was in the HTTP Header :). Now I have a question: currently I'm creating accounts by posting
{"username":"AccountTest1","password":"test6ccou49","email":"AccountTest1#dev.null"}
But this creates only one account. I have tried some solutions from the Internet, but they don't work. Is there any way to create a lot of accounts at once to test performance?
My create-account function takes a JSON object: {xxx:yyy,bbb:ccc}