On the Spotify Developer site there is a description of the JSON format returned by "Get Audio Analysis for a Track". However, there is no information on "track.codestring", "track.echoprintstring" and "track.rhythmstring". Does anyone know what information is encoded in these long strings?
I'm currently embarking on machine/deep learning applied for Music Information Retrieval.
There seems to be no way to ask this question directly on developer.spotify.com, so I roamed the web but couldn't find an answer.
This is the relevant part of the JSON example at 'https://developer.spotify.com/documentation/web-api/reference/tracks/get-audio-analysis/':
"codestring": "eJxVnAmS5DgOBL-ST-B9_P9j4x7M6qoxW9tpsZQSCeI...",
"code_version": 3.15,
"echoprintstring": "eJzlvQmSHDmStHslxw4cB-v9j_A-tahhVKV0IH9...",
"echoprint_version": 4.12,
"synchstring": "eJx1mIlx7ToORFNRCCK455_YoE9Dtt-vmrKsK3EBsTY...",
"synch_version": 1,
"rhythmstring": "eJyNXAmOLT2r28pZQuZh_xv7g21Iqu_3pCd160xV...",
"rhythm_version": 1
This document for the Echo Nest seems to describe the same properties that the Spotify API returns (for an older version, unfortunately):
Analyzer Documentation
I also recommend checking out kaleidosync, a visualisation app based on the Spotify/Echo Nest analysis data.
kaleidosync demo
source on github
I'm super late to the party here, but I'll share my findings in case they help someone else. The extract below is from what looks like an archived version of the original Echo Nest Analyzer Documentation (v3.2), and I've linked to where I was able to browse the document.
Output Data
track data
codestring, echoprintstring: these represent two different audio fingerprints computed on the audio and are used by other Echo Nest services for song identification.
synchstring: a synchronization code that allows a client player to synchronize the analysis data to the audio waveform with sample accuracy, regardless of its decoder type or version. See Synchstring section*.
rhythmstring: a representation of spectro-temporal transients as binary events. This temporal data distributed on 8 frequency channels aims to be independent of timbre and pitch representations. See Rhythmstring section*.
*Echonest API Docs
Synchstring, Rhythmstring
Synchdata decoding - Github
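For a quick look inside these strings, here is a minimal Node.js sketch of the outer decoding step, assuming the scheme used by the synchdata repository above (URL-safe base64 followed by zlib compression). The inflated payload still needs the format-specific parsing described in the Analyzer documentation.
const zlib = require('zlib');

function decodeEchoNestString(encoded) {
  // The strings use '-' and '_' in place of '+' and '/' (URL-safe base64).
  const b64 = encoded.replace(/-/g, '+').replace(/_/g, '/');
  // The "eJ..." prefix is the base64 of a zlib header, so inflate after decoding.
  return zlib.inflateSync(Buffer.from(b64, 'base64')).toString('utf8');
}

// e.g. decodeEchoNestString(analysis.track.rhythmstring)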
I have been trying to understand how DASH works, mainly the MPD and how a remote client boots up to play a stream. Of the many parameters in the MPD, the Initialization range and the SegmentBase indexRange seem to be of most interest. If I understand it right, these values give the base URL and the mappings to key frames that must be retrieved if the client seeks/rewinds a video.
My question is whether these values can be inspected before I actually play a video. For example, can I use a tool like youtube-dl to download these byte ranges and decode them in a way that is human readable?
Much appreciated.
-Jamie
I'm also starting to look into DASH so take my answer with a grain of salt.
The SegmentBase is used when you have a single segment in a representation. For multiple segments there's SegmentList and SegmentTemplate. You can find more in this MPEG-DASH overview.
For MPEG-DASH the SegmentBase indexRange attribute points to the location of the sidx box (Segment Index Box). The box contains information about the sub-segments and random access points for seeking etc. There's more info in this Quick Tutorial on MPEG-DASH.
In the case of WebM-DASH the segment index corresponds to the Cues element.
The Initialization range attribute points to the initialization segment.
If the server supports it, you can issue HTTP Range requests to get the data, but you'll need to parse it yourself.
There's a Node.js ISO BMFF parser here: iso-bmff-parser-stream and the DASH-IF reference client implementation in JavaScript can be found at: dash.js.
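To make the Range-request idea concrete, here is a hedged Node.js sketch (Node 18+ for built-in fetch) that fetches the bytes named by indexRange and reads the header of the top-level box, which should be the sidx box. The URL and byte range are placeholders, not values from a real MPD.
const url = 'https://example.com/video/main.mp4';  // placeholder
const indexRange = '718-1021';                     // placeholder, taken from SegmentBase indexRange

async function readSidxHeader() {
  const res = await fetch(url, { headers: { Range: 'bytes=' + indexRange } });
  if (res.status !== 206) throw new Error('server ignored the Range request');
  const buf = Buffer.from(await res.arrayBuffer());
  // ISO BMFF box header: 32-bit big-endian size, then a 4-character type code.
  const size = buf.readUInt32BE(0);
  const type = buf.toString('ascii', 4, 8);  // expect 'sidx'
  console.log(type, size);
  // Parsing the sub-segment entries is more work; the iso-bmff-parser-stream
  // module mentioned above can do that for you.
}

readSidxHeader().catch(console.error);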
For WebM the Cues can be read using mkvinfo, as reported by @jamie.
I have recently installed FreePBX with Asterisk included. I activated the REST interface, so I can open /ari/asterisk/info and it responds with JSON. Now I want to see all my call recordings. I configured recordings and the server saves them in WAV format. That's fine, but how can I see them through JSON/REST? I tried opening /ari/asterisk/recordings, but it responds with "resource not found".
As you can see in the docs, you can use:
GET /recordings/stored/{recordingName}
EDIT: You can see the list of recordings stored with
GET /recordings/stored
You are missing the point here: the ARI recordings interface isn't meant to be used with the files that you have stored via FreePBX. The recordings API is meant to let you manage recordings from within a Stasis application, that is, start a recording from a Stasis application and manage it there. If the recording was made outside of Stasis, the ARI engine will not be aware of it.
Well, at least that's what it's supposed to do.
Nir
This is partly doable. FreePBX doesn't seem to use the native Asterisk recording APIs, so you can only retrieve the filename.
First get all the channels:
GET /ari/channels
Find your channel's ID from the response's id field
Then you can request the variable CALLFILENAME from the channel's variable endpoint:
GET /ari/channels/{id}/variable?variable=CALLFILENAME
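Putting those two calls together, here is a hedged Node.js sketch; the host, ARI username and password are placeholders for whatever is configured in ari.conf and http.conf on your box.
const base = 'http://pbx.example.com:8088/ari';  // placeholder host/port
const auth = 'Basic ' + Buffer.from('ariuser:arisecret').toString('base64');  // placeholder credentials

async function callFileNameOfFirstChannel() {
  // 1. list the live channels
  const channels = await (await fetch(base + '/channels', { headers: { Authorization: auth } })).json();
  if (channels.length === 0) return null;
  // 2. read the CALLFILENAME channel variable set by FreePBX
  const res = await fetch(
    base + '/channels/' + channels[0].id + '/variable?variable=CALLFILENAME',
    { headers: { Authorization: auth } }
  );
  return (await res.json()).value;
}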
I am using TheMovieDB API for getting information about several movies. The API supports usage of different languages.
https://www.themoviedb.org/documentation/api
I am aware that the API has a way to get the currently set language.
Is there any way to get the list of all supported languages?
In my app, I am planning to add an option for users to select their desired language and see the movie details in that language.
Looks like there is no way to get the list of all supported languages from the API. The API docs do not have any information on that.
I am closing this question, as others might look for a solution to this.
If I am wrong, let me know.
Try this one http://docs.themoviedb.apiary.io/#reference/movies/movieidtranslations
Hope it will be useful.
You can get the official list of supported languages from this endpoint:
https://developers.themoviedb.org/3/configuration/get-primary-translations
As of today the endpoint returns the following ISO codes:
"ar-AE",
"ar-SA",
"bg-BG",
"bn-BD",
"ca-ES",
"ch-GU",
"cs-CZ",
"da-DK",
"de-DE",
"el-GR",
"en-US",
"eo-EO",
"es-ES",
"es-MX",
"eu-ES",
"fa-IR",
"fi-FI",
"fr-CA",
"fr-FR",
"he-IL",
"hi-IN",
"hu-HU",
"id-ID",
"it-IT",
"ja-JP",
"ka-GE",
"kn-IN",
"ko-KR",
"lt-LT",
"ml-IN",
"nb-NO",
"nl-NL",
"no-NO",
"pl-PL",
"pt-BR",
"pt-PT",
"ro-RO",
"ru-RU",
"sk-SK",
"sl-SI",
"sr-RS",
"sv-SE",
"ta-IN",
"te-IN",
"th-TH",
"tr-TR",
"uk-UA",
"vi-VN",
"zh-CN",
"zh-TW"
I tested some of the languages with the genres endpoint:
https://api.themoviedb.org/3/genre/movie/list
Some of them return null for the genre names (meaning they have not been translated).
You can see an overview of the translated material at https://www.themoviedb.org/contribute.
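If you would rather fetch that list from code than hard-code it, a minimal sketch could look like the following (it assumes the /3/configuration/primary_translations path behind the doc page linked above, with YOUR_API_KEY as a placeholder):
async function primaryTranslations(apiKey) {
  const url = 'https://api.themoviedb.org/3/configuration/primary_translations?api_key=' + apiKey;
  const res = await fetch(url);
  return res.json();  // an array of "xx-YY" locale codes like the list above
}

// e.g. primaryTranslations('YOUR_API_KEY').then(list => console.log(list.includes('de-DE')));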
This URL explains how the API works with different languages, but it doesn't say which languages are supported:
https://developers.themoviedb.org/3/getting-started/languages
The supported languages are listed in this link:
https://developers.themoviedb.org/3/configuration/get-languages
Notice that some languages exist but the details are not fully translated; only some parts, like the overview or sometimes the title, are translated. I hope this helps. (Don't forget to put your API key in the URLs, as both links tell you.)
The language codes are specified by ISO 639-1:
https://en.wikipedia.org/wiki/ISO_639-1
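As a companion to the sketch above, this one queries the get-languages endpoint (again assuming the /3/configuration/languages path and a placeholder API key); unlike primary_translations, it returns objects rather than plain locale strings:
async function supportedLanguages(apiKey) {
  const res = await fetch('https://api.themoviedb.org/3/configuration/languages?api_key=' + apiKey);
  // each entry should look roughly like { iso_639_1: "en", english_name: "English", name: "English" }
  return res.json();
}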
I found this in the Java API for TMDB:
https://github.com/holgerbrandl/themoviedbapi/blob/master/src/main/java/info/movito/themoviedbapi/model/Discover.java#L69
List of ISO Codes
Is it possible for a Chrome extension to listen for streaming audio from any of the browser's tabs? I would like to capture the streaming audio data and then analyse it.
Thanks
You could try three approaches, though none of them is guaranteed to meet your needs.
Before going into more detail, I must note that Chrome extensions do not provide convenient tools for working at the per-connection level, which is the sufficiently low level required for stream capturing. This is by design. This is why the first option is:
Look at other browsers, for example Firefox, which provides low-level APIs for connections. They are already known to be used by similar extensions; you may have a look at MediaStealer. If you do not have a specific requirement to build your system on Chrome, you should probably move to Firefox.
You can develop a Chrome extension which intercepts HTTP requests by means of the webRequest API, analyses their headers and extracts media URLs (for example, ones with an audio/mpeg MIME type in the HTTP headers); a rough sketch of this follows after these options. For a quick code example you may look at the following SO question - How to change response header in Chrome. Having the URL, you can force the media to download as a file. It will land in the default downloads folder and may have an unfriendly name. (I made such an extension, but I had no requirements for further processing.) If you need to process such files further, it can be a challenge to monitor them in the folder and run additional analysis in a separate program.
You may have a look at NPAPI plugins in general, and their streaming APIs in particular. I can imagine that you create a plugin registered for, again, the audio/mpeg MIME type, which receives the data via the NPP_NewStream, NPP_WriteReady and NPP_Write methods. The plugin can be wrapped into a Chrome extension. Though I have made NPAPI plugins, I never used this API, and I'm not sure it will work as expected. Nevertheless, I'm mentioning this possibility here for completeness. This method requires some coding other than web coding, meaning C/C++. NB. NPAPI plugins are deprecated and not supported in Chrome since September 2015.
Taking into account that you have some "fingerprinting service" external to the extension in mind, which sounds like intelligent data processing, you may be interested in building the whole system outside of a browser. For example, you could involve an HTTP proxy that saves media from the passing traffic.
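Here is a rough sketch of the header-sniffing idea from the second option, as a background-script listener. It assumes the "webRequest" permission and matching host permissions are declared in manifest.json, and it only logs the URLs rather than downloading anything.
chrome.webRequest.onHeadersReceived.addListener(
  function (details) {
    var contentType = (details.responseHeaders || []).find(function (h) {
      return h.name.toLowerCase() === 'content-type';
    });
    if (contentType && contentType.value.indexOf('audio/') === 0) {
      // a media URL was found; it could be handed to chrome.downloads.download()
      console.log('audio stream:', details.url, contentType.value);
    }
  },
  { urls: ['<all_urls>'] },
  ['responseHeaders']
);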
If you're writing a Chrome extension, you can use the Chrome tabCapture API to record audio.
chrome.tabCapture.capture({audio: true}, function(stream) {
var recorder = new MediaRecorder(stream);
[...]
});
The rest is left as an exercise to the reader; MDN has more documentation on how to use MediaRecorder.
When this question was asked in 2013, neither chrome.tabCapture nor MediaRecorder existed.
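For what it's worth, here is one way the part elided above could be filled in (a sketch, not the answerer's code): collect the recorded chunks and hand them off as a Blob when recording stops.
chrome.tabCapture.capture({ audio: true }, function (stream) {
  var recorder = new MediaRecorder(stream);
  var chunks = [];
  recorder.ondataavailable = function (e) { chunks.push(e.data); };
  recorder.onstop = function () {
    var blob = new Blob(chunks, { type: 'audio/webm' });
    // analyse or upload the captured audio here
  };
  recorder.start();
});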
Mac OS X solution using Soundflower: http://rogueamoeba.com/freebies/soundflower/
After installing Soundflower it should appear as a separate audio device in the sound preferences (Apple > System Preferences > Sound). Divert the computer's audio to the 2ch option (stereo; 16ch is surround), then inside a DAW such as Audacity set the audio input to Soundflower. Now the sound should be channelled to your DAW, ready for recording.
Note: having diverted the audio from the internal speakers to Soundflower, you will only be able to hear the audio if the 'Soundflowerbed' app is actually open. You know it's open if there's an 8-legged blob in the top-right menu bar. Clicking this icon gives you the Soundflower options.
My privoxy has the following log:
2013-08-28 18:25:27.953 00002f44 Request: api.audioaddict.com/v1/di/listener_sessions.jsonp?_method=POST&callback=_AudioAddict_WP_ListenerSession_create&listener_session%5Bid%5D=null&listener_session%5Bis_premium%5D=false&listener_session%5Bmember_id%5D=null&listener_session%5Bdevice_id%5D=6&listener_session%5Bchannel_id%5D=178&listener_session%5Bstream_set_key%5D=webplayer&_=1377699927926
2013-08-28 18:25:27.969 0000268c Request: api.audioaddict.com/v1/ping.jsonp?callback=_AudioAddict_WP_Ping__ping&_=1377699927928
2013-08-28 18:25:27.985 00002d48 Request: api.audioaddict.com/v1/di/track_history/channel/178.jsonp?callback=_AudioAddict_TrackHistory_Channel&_=1377699927942
2013-08-28 18:25:54.080 00003360 Request: pub7.di.fm/di_progressivepsy_aac?type=.flv
So I got the stream url and record it:
D:\Profiles\user\temp>wget pub7.di.fm/di_progressivepsy_aac?type=.flv
--18:26:32-- http://pub7.di.fm/di_progressivepsy_aac?type=.flv
=> `di_progressivepsy_aac#type=.flv'
Resolving pub7.di.fm... done.
Connecting to pub7.di.fm[67.221.255.50]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [video/x-flv]
[ <=> ] 1,234,151 8.96K/s
I got a file that can be played in any multimedia player.
I noticed that iTunes Preview allows you to crawl and scrape pages via the http:// protocol. However, many of the links try to open in iTunes rather than the browser. For example, when you go to the iBooks page, it immediately tries to open a URL with the itms:// protocol.
Are there any other methods of crawling the App Store or is this the only way?
Can the itms:// protocol links themselves be crawled somehow?
I would take a decent look at the iTunes Search API and the iTunes Enterprise Partner API:
Search API -
http://www.apple.com/itunes/affiliates/resources/blog/introduction---search-api.html
Enterprise Partner API -
http://www.apple.com/itunes/affiliates/resources/documentation/itunes-enterprise-partner-feed.html
You might get most/all of the information you need in a nice JSON file format.
If you can't get the information you need with the API, I would be interested to know what it is :)
As phillipp mentioned, the iTunes search API is an easy way to retrieve data about your App Store listings in JSON format.
Simply query it with your app id (you can find the app id by viewing the web listing for your app at itunes.apple.com), e.g.:
http://itunes.apple.com/lookup?id=INSERT_YOUR_APP_ID_HERE
then, parse the resulting JSON to your heart's content.
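For example, a small sketch of that lookup-and-parse step; the field names picked out below are illustrative ones the lookup response includes for apps.
async function lookupApp(appId) {
  const res = await fetch('https://itunes.apple.com/lookup?id=' + appId);
  const data = await res.json();
  if (data.resultCount === 0) return null;
  // pick out a few fields from the first result
  const app = data.results[0];
  return { name: app.trackName, version: app.version, rating: app.averageUserRating };
}

// e.g. lookupApp('INSERT_YOUR_APP_ID_HERE').then(console.log);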
The only difference between http:// links and itms:// links is that you need to set your User-Agent to an iTunes user-agent, and depending on the version you may also have to include a verification code based on some not-so-secret algorithm.
For example this is the code for iTunes 9:
# Some magic. Generates a seed we use for X-Apple-Validation. Adapted from LWP::UserAgent::iTMS_Client.
function comp_seed($url, $user_agent) {
    $random  = sprintf("%04X%04X", rand(0, 0x10000), rand(0, 0x10000));
    $static  = base64_decode("ROkjAaKid4EUF5kGtTNn3Q==");
    $url_end = (preg_match("|.*/.*/.*(/.+)$|", $url, $matches)) ? $matches[1] : '?';
    $digest  = md5(join("", array($url_end, $user_agent, $static, $random)));
    return $random . '-' . strtoupper($digest);
}
However, if you are only scraping, iTunes Preview should work for your purposes; the link you gave us to the iBooks page had more than enough information to scrape.
We tried scraping ourselves too, about a year ago, and it just became too much of a headache. Philipp's comment is a good one, as the enterprise feed from Apple (you need to apply for it with a legitimate use) does have a good amount of the useful info that you might be after when scraping.
There are a few companies that offer data as a service too; abto and AppMonsta are two I heard of when I was looking. I can't seem to find abto anymore, but http://appmonsta.com seems to still be around. The Search API looks OK (I never experimented with it) but limited.
Good luck!