I have been trying to understand how DASH works, mainly the MPD and how a remote client boots up to play a stream. Of the many parameters in the MPD, the Initialization range and the SegmentBase indexRange seem to be of most interest. If I understand them right, these values give the base URL and the mappings to key frames that must be retrieved if the client seeks/rewinds a video.
My question is whether these values can be seen before I actually play a video. For example, can I use a tool like youtube-dl to download these byte ranges and decode them in a way that is human-readable?
Much appreciated.
-Jamie
I'm also starting to look into DASH so take my answer with a grain of salt.
The SegmentBase is used when you have a single segment in a representation. For multiple segments there's SegmentList and SegmentTemplate. You can find more in this MPEG-DASH overview.
For MPEG-DASH the SegmentBase indexRange attribute points to the location of the sidx box (Segment Index Box). The box contains information about the sub-segments and random access points for seeking etc. There's more info in this Quick Tutorial on MPEG-DASH.
In the case of WebM-DASH the segment index corresponds to the Cues element.
The Initialization range attribute points to the initialization segment.
If the server supports it, you could issue HTTP Range requests to get the data, but you'll need to parse it yourself.
There's a Node.js ISO BMFF parser here: iso-bmff-parser-stream, and the DASH-IF reference client implementation in JavaScript can be found at dash.js.
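If you want to try this by hand, here's a minimal sketch of such a Range request (assuming Node 18+ run as an ES module; the URL and byte range are made-up placeholders, to be replaced with the BaseURL and SegmentBase indexRange values from your MPD):

const url = 'https://example.com/video.mp4'; // hypothetical media URL from <BaseURL>
const indexRange = '718-1565';               // hypothetical, e.g. indexRange="718-1565"

// Ask the server for just the bytes that hold the segment index.
const res = await fetch(url, { headers: { Range: `bytes=${indexRange}` } });
console.log(res.status); // 206 Partial Content if the server honors ranges

// The body is the raw 'sidx' box, ready to be fed to an ISO BMFF parser.
const sidxBytes = Buffer.from(await res.arrayBuffer());
console.log(sidxBytes.length, 'bytes');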
For WebM, the Cues can be read using mkvinfo, as reported by @jamie.
I'm reading up on Cloud file storage, and ran across the PopulationPolicy property under Storage.Provider.StorageProviderSyncRootInfo, but I'm not sure what this does. The definition that MSDN provides just cuts off. Under the Fields section, AlwaysFull sounds similar to how the first part of HydrationPolicyModifier's ValidationRequired field works ("it guarantees that the data returned by the sync provider is always persisted to the disk prior to it being returned to the user application"). I believe that hydration fills the placeholder object with the correct data from the cloud (correct me if I'm wrong), but I'm confused about what populating does.
What is populating?
What does changing the PopulationPolicy to Full and AlwaysFull do?
Population is about files and folders (placeholders), not their content (Hydration).
If you don't use AlwaysFull (so the only other valid value is Full), the platform will call your sync engine back with CF_CALLBACK_TYPE_FETCH_PLACEHOLDERS; otherwise this type of callback will not be used.
Hi guys! I'm looking for a solution or some ideas on how to solve my task.
There is a video surveillance camera (vendor: Hikvision) with an accessible web interface.
In the web interface there is a field, Device Name, containing data I need to retrieve by means of the Zabbix server and then use for renaming discovered hosts.
Since Hikvision cameras support SNMP, I've tried the SNMP agent in Zabbix. It turned out that the Hikvision MIB doesn't contain data from that field.
Also, exploring the web interface through Developer Tools in Google Chrome, I stumbled upon the request URL http://10.90.187.16/ISAPI/System/deviceInfo, which gives this response in XML format:
<DeviceInfo xmlns="http://www.hikvision.com/ver20/XMLSchema" version="2.0">
<deviceName>1.5.1.1</deviceName>
<deviceID>566eec0b-6580-11b3-81a1-1868cb48861f</deviceID>
<deviceDescription>IPCamera</deviceDescription>
<deviceLocation>hangzhou</deviceLocation>
<systemContact>Hikvision.China</systemContact>
<model>DS-2CD2155FWD-IS</model>
<serialNumber>DS-2CD2155FWD-IS20170417AAWR749464587</serialNumber>
<macAddress>18:68:cb:48:86:1f</macAddress>
<firmwareVersion>V5.4.5</firmwareVersion>
<firmwareReleasedDate>build 170124</firmwareReleasedDate>
<encoderVersion>V7.3</encoderVersion>
<encoderReleasedDate>build 170123</encoderReleasedDate>
<bootVersion>V1.3.4</bootVersion>
<bootReleasedDate>100316</bootReleasedDate>
<hardwareVersion>0x0</hardwareVersion>
<deviceType>IPCamera</deviceType>
<telecontrolID>88</telecontrolID>
<supportBeep>false</supportBeep>
<supportVideoLoss>false</supportVideoLoss>
</DeviceInfo>
The tag <deviceName>1.5.1.1</deviceName> contains the required data, and now the question is how to put two and two together by means of Zabbix.
Digging into the Zabbix documentation, I've found an article about creating an item based on the HTTP agent with an XML request. Unfortunately, there aren't any examples of how to do it exactly.
Has somebody had such experience? Any clues will be helpful.
You can create an HTTP Agent item, set it to TEXT type and point it to http://10.90.187.16/ISAPI/System/deviceInfo (don't forget the authentication, if required!). Zabbix will retrieve the full XML.
To get the desired value you have to create a dependent item, point it to the previous item and set up a preprocessing step.
Create a single XML XPath preprocessing rule with the parameter string(/DeviceInfo/deviceName) to get the 1.5.1.1 value (note the lowercase d, matching the XML above).
If you want the firmware version, create another dependent item and set its XPath to string(/DeviceInfo/firmwareVersion), and so on for every element you need.
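As a side note, you can sanity-check the XPath locally before wiring it into Zabbix. A small sketch, assuming Node.js with the xpath and @xmldom/xmldom npm packages; the camera's XML declares a default namespace, so local-name() is used here to avoid prefix bookkeeping (if the plain path returns nothing in Zabbix, the namespace is usually the culprit and this form is a common workaround):

const xpath = require('xpath');
const { DOMParser } = require('@xmldom/xmldom');

// Trimmed copy of the camera's response from above.
const xml = `<DeviceInfo xmlns="http://www.hikvision.com/ver20/XMLSchema" version="2.0">
  <deviceName>1.5.1.1</deviceName>
  <firmwareVersion>V5.4.5</firmwareVersion>
</DeviceInfo>`;

const doc = new DOMParser().parseFromString(xml, 'text/xml');

// local-name() sidesteps the default namespace declared on <DeviceInfo>.
const name = xpath.select(
  "string(/*[local-name()='DeviceInfo']/*[local-name()='deviceName'])",
  doc
);
console.log(name); // "1.5.1.1"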
If you want a single value you can use a single item, adding the preprocessing rule to the http agent item. I use my solution for flexibility, maybe one day I'll need another XML element or maybe a firmware update will add some element to the page.
Dependent items are more flexible, but of course the full XML uses more storage in the database for stuff you don't need right now: it's a tradeoff, either way works!
On Spotify Developer there is a description of the JSON format that is returned by "Get Audio Analysis for a Track". However, there is no information on "track.codestring", "track.echoprintstring" and "track.rhythmstring". Does anyone know the definition of the information that is hidden in these long strings?
I'm currently embarking on machine/deep learning applied for Music Information Retrieval.
There seems to be no way to directly address this question on developer.spotify.com, so I roamed the web but couldn't find an answer.
This is from the JSON example on 'https://developer.spotify.com/documentation/web-api/reference/tracks/get-audio-analysis/':
"codestring": "eJxVnAmS5DgOBL-ST-B9_P9j4x7M6qoxW9tpsZQSCeI...",
"code_version": 3.15,
"echoprintstring": "eJzlvQmSHDmStHslxw4cB-v9j_A-tahhVKV0IH9...",
"echoprint_version": 4.12,
"synchstring": "eJx1mIlx7ToORFNRCCK455_YoE9Dtt-vmrKsK3EBsTY...",
"synch_version": 1,
"rhythmstring": "eJyNXAmOLT2r28pZQuZh_xv7g21Iqu_3pCd160xV...",
"rhythm_version": 1
This document for the Echo Nest seems to describe the same properties the Spotify API returns (for an older version, unfortunately):
Analyzer Documentation
I also recommend checking out kaleidosync, a visualisation app based on Spotify/EchoNest.
kaleidosync demo
source on github
I'm super late to the party here, but I'll share my findings in case they help someone else. The below is from what looks like an archived version of the original Echo Nest Analyzer Documentation (v3.2). I've extracted a bit of it below and have provided a link to where I was able to browse the document.
Output Data
track data
codestring, echoprintstring: these represent two different audio fingerprints computed on the audio and are used by other Echo Nest services for song identification.
synchstring: a synchronization code that allows a client player to synchronize the analysis data to the audio waveform with sample accuracy, regardless of its decoder type or version. See Synchstring section*.
rhythmstring: a representation of spectro-temporal transients as binary events. This temporal data distributed on 8 frequency channels aims to be independent of timbre and pitch representations. See Rhythmstring section*.
*Echonest API Docs
Synchstring, Rhythmstring
Synchdata decoding - Github
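For what it's worth, the 'eJ…' prefix on all of these strings is the telltale base64 of a zlib header, so the first decoding step looks like the sketch below (Node.js; interpreting the inflated payload is then per-string, as described in the Analyzer docs above). This is an assumption based on the old Echo Nest docs, not anything Spotify documents today:

const zlib = require('zlib');

// Paste the full string from the API response as a CLI argument; the
// '...'-truncated excerpts in the question won't inflate completely.
const synchstring = process.argv[2];

// Node's base64 decoder also accepts the URL-safe alphabet ('-' and '_')
// that these strings use. Step 1: undo base64; step 2: zlib-inflate.
const inflated = zlib.inflateSync(Buffer.from(synchstring, 'base64'));
console.log(inflated.length, 'bytes of raw payload');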
I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the number I expect to see from getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints within a single getData() call (they should all be the same).
I can't post the code here (client project) but it's substantially the same framework as the Community Connector code in Google's docs. I have caching and backoff implemented.
Looking for any ideas or if anyone has experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the query is for semantic type detection, the request will feature sampleExtraction: true:
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
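In other words, per the behavior described above, if you declare the type of every field up front, GDS has no reason to issue those extra sampling calls. A sketch using the Community Connector Fields service (field names are illustrative):

var cc = DataStudioApp.createCommunityConnector();

function getFields() {
  var fields = cc.getFields();
  var types = cc.FieldType;

  // Declaring types explicitly, so GDS doesn't have to sample getData
  // to detect semantics.
  fields.newDimension()
    .setId('country')
    .setType(types.TEXT);

  fields.newMetric()
    .setId('visits')
    .setType(types.NUMBER);

  return fields;
}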
If the GDS report includes multiple widgets with different dimension/metric configurations, then GDS might fire a separate getData call for each of them.
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom adapter retrieves data via API calls to third-party services, and that data is agnostic to the request.fields property sent by GDS, then these API calls are multiplied by N+1 (where N = the amount of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's request for getData (typically requesting more fields than the search filters) will be the only one allowed to query the API endpoint. Before starting to do so, it stores a key in the cache: "cache_{hashOfReportParameters}_building" => true.
if (enableCache) {
  // Mark this report's data set as "being built" so concurrent getData
  // calls from filters/widgets know to wait for it.
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve API responses, paginating in a loop, and buffer the results.
Once it has finished, it will delete the cache key "cache_{hashOfReportParameters}_building" and cache the final merged results it buffered so far under "cache_{hashOfReportParameters}_final".
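In code, those two steps might look roughly like this (a sketch: cache is assumed to be the enhanced cache wrapper linked at the end of this answer, bufferedRows holds the merged API responses, and I'm assuming the wrapper exposes remove() like the underlying CacheService; overwriting the key with anything other than 'true' would work too):

if (enableCache) {
  // Persist the merged result set for the follow-up getData calls.
  cache.putString('cache_' + hashOfReportParameters + '_final',
                  JSON.stringify(bufferedRows));
  // Release the "building" lock so waiting callers can proceed.
  cache.remove('cache_' + hashOfReportParameters + '_building');
}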
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing prior to the primary getData call, so we add a small delay for requests that look like search filters / widgets chasing the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  // Heuristic: filters/widgets request very few fields, the main graph many.
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000); // give the primary getData call a head start
  }
}
After that, we compute a hash over all of the moving parts of the report (date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
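A sketch of that hash (illustrative; include every request property that changes the upstream query):

// Illustrative: hash everything that influences the upstream API query.
var hashInput = JSON.stringify({
  dateRange: request.dateRange,        // start/end dates from GDS
  configParams: request.configParams   // connector config that affects the data
});
var digest = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, hashInput);
var hashOfReportParameters = digest.map(function (b) {
  // computeDigest returns signed bytes (-128..127); normalize to hex.
  return ((b & 0xFF) + 0x100).toString(16).slice(1);
}).join('');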
Now the best part, as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000); // poll until the primary getData releases the lock
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final" -- and in case we fail, it's always a good idea to have a backup plan, which would be to allow it to traverse the API again. We have encountered a ~2% error rate retrieving data we cached...
With the cached result (or the buffered API responses), you just transform your response into the schema GDS needs (which differs between graphs and filters).
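That shaping step might look like this sketch (getFields is assumed to be your schema builder from getSchema, and cachedRows an array of objects keyed by field ID):

function buildResponse(request, cachedRows) {
  // Each caller (graph or filter) names only the fields it wants.
  var fieldIds = request.fields.map(function (f) { return f.name; });
  return {
    schema: getFields().forIds(fieldIds).build(),
    rows: cachedRows.map(function (row) {
      return { values: fieldIds.map(function (id) { return row[id]; }) };
    })
  };
}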
As you start implementing this, you'll notice yet another problem: Google's cache is limited to a max of 100KB per key. There is, however, no limit on the number of keys you can cache, and fortunately others have encountered similar needs in the past and came up with a smart solution: split the big chunk you need cached into multiple cache keys, and glue them back together into one object when retrieving.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
I cannot share the final solution we have implemented with you as it is too specific to a client - but I hope that this will at least give you a good idea on how to approach the problem.
Caching the full API result is a good idea in general to avoid round trips and server load for no good reason if near-realtime is good enough for your needs.
I have recently installed FreePBX with Asterisk included. I activated the REST interface, so I can open /ari/asterisk/info and it responds with JSON. Now I want to see all my call recordings. I configured recordings and the server saves them in wav format. That's fine, but how can I see them through JSON/REST? I tried opening /ari/asterisk/recordings, but it responds with "resource not found".
As you can see in the docs, you can use:
GET /recordings/stored/{recordingName}
EDIT: You can see the list of recordings stored with
GET /recordings/stored
You are missing the point here: the ARI recordings interface isn't meant to be used with the files that you have stored via FreePBX. The recordings API is meant to let you manage recordings from within a Stasis application. That means: start a recording from a Stasis application and manage it there. If the recording was performed outside of Stasis, the ARI engine will not be aware of it.
Well, at least that's what it's supposed to do.
Nir
This is partly doable: FreePBX doesn't seem to use the native Asterisk recording APIs, so you can only retrieve the filename.
First get all the channels:
GET /ari/channels
Find your channel's ID in the response's id field.
Then you can request the variable CALLFILENAME from the channel's variable endpoint:
GET /ari/channels/{id}/variable?variable=CALLFILENAME
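Putting it together, a small sketch (Node 18+; the host, port, and credentials are placeholders for whatever you configured in ari.conf):

// Placeholders: ARI's default HTTP port is 8088; user/pass come from ari.conf.
const base = 'http://pbx.example.com:8088/ari';
const auth = 'Basic ' + Buffer.from('ariuser:aripass').toString('base64');

async function listCallFilenames() {
  // GET /channels returns an array of active channel objects.
  const channels = await fetch(`${base}/channels`, {
    headers: { Authorization: auth }
  }).then(r => r.json());

  for (const ch of channels) {
    // GET /channels/{id}/variable returns {"value": "..."}.
    const res = await fetch(
      `${base}/channels/${ch.id}/variable?variable=CALLFILENAME`,
      { headers: { Authorization: auth } }
    );
    const { value } = await res.json();
    console.log(ch.id, value);
  }
}

listCallFilenames();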