Check end of transcoding via the Vimeo API?

I need to be sure, via an API request, that the transcoding of an uploaded video has fully finished and all possible renditions are available.
I know that the API reports the status in [body][status] as one of
available, uploading, transcoding, uploading_error, transcoding_error,
but the problem is that the status changes from 'transcoding' to 'available' the moment the first rendition has been transcoded. So how can I check via an API request that Vimeo's work has fully finished and no further rendition will be added to the video in the next few minutes?
Thanks

If you are a PRO user and there is a specific size you are looking for, you should wait for that file to appear in the files list. Waiting for "all" is not recommended, because we are constantly changing and improving the list of files available.
If you are not a PRO user, this information is not exposed at all. Once the SD version is available you will be able to generate embed codes, and shortly after that your player will gain the ability to switch into HD mode (once the HD file is available).
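For what it's worth, here is a minimal polling sketch in JavaScript, assuming a PRO account. ACCESS_TOKEN and VIDEO_ID are placeholders, and the files array with a quality field reflects the response shape of the /videos/{id} endpoint at the time; field names may differ in newer API versions.

// Poll the video resource until the rendition we need shows up in the files list.
// ACCESS_TOKEN and VIDEO_ID are placeholders; the response shape may differ by API version.
const ACCESS_TOKEN = 'YOUR_TOKEN';
const VIDEO_ID = '123456789';

async function waitForRendition(quality, intervalMs) {
  for (;;) {
    const res = await fetch('https://api.vimeo.com/videos/' + VIDEO_ID, {
      headers: { Authorization: 'Bearer ' + ACCESS_TOKEN }
    });
    const body = await res.json();
    const files = body.files || [];                    // only present for PRO accounts
    if (files.some(function (f) { return f.quality === quality; })) return files;
    await new Promise(function (r) { setTimeout(r, intervalMs); });  // wait and retry
  }
}

waitForRendition('hd', 30000).then(function (files) {
  console.log('HD rendition is available:', files);
});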

Related

Videos don't loop or seek in Chrome

Two issues with MP4 videos hosted on Microsoft Azure, in Google Chrome only:
The background video doesn't loop (implemented with vide.js).
The video can't be seeked in the vjs player.
I know the server should send video files with HTTP status 206, but my file is sent with 200 the first time, and unless the download completes fully, the problem remains. How do I set up Azure to serve video files correctly?
The reason you cannot seek within the video is the storage service version.
By default, the version is set to 2009-09-19. Unfortunately, this version does not support the Range header, which is required to seek within a video (see https://msdn.microsoft.com/en-us/library/azure/ee691967.aspx). Therefore, you have to change the default version to at least 2011-08-18.
There are several ways to change the default version.
From .NET you can use the CloudBlobClient.SetServiceProperties method.
Another option is to use the utility from https://github.com/Plasma/AzureBlobUtility to change the default service version.
You can also use the Azure REST API to set the version, as documented on https://msdn.microsoft.com/en-us/library/azure/hh452235.aspx.
However, the authentication part is a bit tricky.
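For illustration, here is a rough sketch of that REST call in JavaScript. The account name is a placeholder, and the SharedKey signature for the Authorization header, which is the tricky part, is deliberately left out (use a storage SDK or sign the request yourself).

// Set DefaultServiceVersion so the Blob service honours the Range header.
// 'mystorageaccount' is a placeholder; the Authorization header must carry a
// SharedKey signature, which is omitted here.
const account = 'mystorageaccount';
const body =
  '<?xml version="1.0" encoding="utf-8"?>' +
  '<StorageServiceProperties>' +
    '<DefaultServiceVersion>2011-08-18</DefaultServiceVersion>' +
  '</StorageServiceProperties>';

fetch('https://' + account + '.blob.core.windows.net/?restype=service&comp=properties', {
  method: 'PUT',
  headers: {
    'x-ms-version': '2011-08-18',
    'x-ms-date': new Date().toUTCString()
    // 'Authorization': 'SharedKey ' + account + ':<signature>'   <- signing not shown
  },
  body: body
}).then(function (res) { console.log('Status:', res.status); });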

How do you change the source in a Web Audio context?

I'm making a game that changes some of its objects depending on what music is playing. After each song has ended, I want my audio context to load in a new source and analyze that. However, whenever I tried to do that, I got an error saying that an audio object or buffer can't be used twice.
After some research I learned that ctx.createMediaElementSource(MyHTML5AudioEl) lets you create a source node that takes its audio from an HTML5 element. With this I was able to loop through different songs.
However, for my game I need to play/analyze a 30-second "remote URL" that comes out of the Spotify API. I might be wrong, but ctx.createMediaElementSource(MyHTML5AudioEl) does not seem to let you analyze a source that is hosted on a remote site.
Also, the game needs to work in Mobile Chrome, where createMediaElementSource(MyHTML5AudioEl) does not seem to work.
I might be on a completely wrong path here, but my main question is:
How can I switch remote song URLs in the Web Audio API, in a way that is compatible with Mobile Chrome?
Thanks!
First, as you found out, you can't set the buffer again on an AudioBufferSourceNode. The solution is to create a new one; AudioBufferSourceNodes are intended to be lightweight objects that you can easily create and use.
Second, in Chrome 42 and newer, createMediaElementSource requires appropriate CORS access, so you have to make sure the remote URL sends the appropriate headers and that you set the crossOrigin attribute appropriately.
Finally, Mobile Chrome currently does not pass the data from an audio element through createMediaElementSource.
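A rough sketch of that approach in JavaScript: download each preview with CORS, decode it, and play it through a fresh AudioBufferSourceNode connected to one shared AnalyserNode. This assumes the 30-second preview URL allows cross-origin requests.

// Instead of createMediaElementSource, fetch and decode each song, then play it
// through a new AudioBufferSourceNode. This also works in Mobile Chrome.
const ctx = new (window.AudioContext || window.webkitAudioContext)();
const analyser = ctx.createAnalyser();
analyser.connect(ctx.destination);

let currentSource = null;

async function playSong(url) {              // url: e.g. a preview URL that allows CORS
  if (currentSource) currentSource.stop();  // a source node can only be started once,
                                            // so discard the old one and make a new one
  const data = await fetch(url).then(function (r) { return r.arrayBuffer(); });
  const buffer = await ctx.decodeAudioData(data);
  currentSource = ctx.createBufferSource(); // cheap to create, one per song
  currentSource.buffer = buffer;
  currentSource.connect(analyser);
  currentSource.start();
}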

Storing local audio file data across page loads in HTML5

I am trying to make a webapp that will load a page from a remote server, but allow the user to play audio from files that are on their local drive (not downloaded from the remote server). I am able to get this to work, but I also need it to save what the user has done for subsequent visits. For example: the user loads a page, clicks a "choose file" button, selects an mp3, and plays it. The user then closes the browser, opens it again, returns to the page, and is able to play the same audio without having to select it again.
I understand that the audio playback is separate from the saving of the user's selection, but in this case one seems to dictate the other.
I am able to get the select-and-play functionality to work with this:
<html><body>
<script type='text/javascript'>
  function handleFiles(files){
    var file = window.URL.createObjectURL(files[0]);
    document.getElementById('audioPlayer').src = file;
  }
</script>
<audio id='audioPlayer' controls></audio>
<input type='file' id='selectedFile'
       onchange='handleFiles(this.files)' />
</body></html>
...but I do not know how to store the selected file data in a way that lets me automatically load it on the next visit. What can I use to store that file location (or even the whole file itself, if it comes to that) so that I can still play the audio without the user selecting the file again?
I suspect that saving the local file URL somehow may not be possible for security reasons, since auto-playing a file from the local file system without user interaction could be bad news.
File handles from the file-open dialog are not reusable across different page-load sessions.
The best you can do is copy the audio data to HTML5 localStorage and play it from there, or upload the data to your server and play it from there.
http://docs.webplatform.org/wiki/apis/web-storage/Storage/localStorage
localStorage is limited to a few megabytes, depending on the browser.
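A rough sketch of that idea, building on the question's handleFiles function: read the chosen file as a data: URL, keep it in localStorage, and restore it on the next visit. This only works for files small enough to fit in the storage quota.

// Store the selected file as a data: URL and restore it on later visits.
function handleFiles(files) {
  var reader = new FileReader();
  reader.onload = function () {
    try {
      localStorage.setItem('savedAudio', reader.result);   // data:audio/...;base64,...
    } catch (e) {
      console.warn('File too large for localStorage', e);  // quota is only a few MB
    }
    document.getElementById('audioPlayer').src = reader.result;
  };
  reader.readAsDataURL(files[0]);
}

// On page load, restore the last selection if there is one.
var saved = localStorage.getItem('savedAudio');
if (saved) {
  document.getElementById('audioPlayer').src = saved;
}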
At this time, Mikko's answer is the correct answer to my question, but I thought I'd share a possible alternative for anyone else who comes across this thread:
The FileSystem API looks like it would perfectly suit my needs in this case, but at the time of this writing it is only supported in Chrome. If audio playback is a minor add-on feature of your web app, though, this might be an option to give Chrome users a better experience, while other users would simply be unaware that they're missing out.
In this HTML5 Rocks article, the author shows how to use it, including how to copy user-selected files into a local on-disk sandbox and how to get a URL (needed in my case for audio playback) to those files.
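For completeness, here is a rough Chrome-only sketch along the lines of that article. The prefixed APIs (webkitPersistentStorage, webkitRequestFileSystem) are Chrome-specific, and older Chrome versions exposed the quota call as window.webkitStorageInfo.requestQuota instead.

// Copy the chosen file into the app's sandboxed file system; entry.toURL()
// then yields a filesystem: URL that the <audio> element can play on later visits.
function saveToSandbox(file) {
  navigator.webkitPersistentStorage.requestQuota(file.size, function (grantedBytes) {
    window.webkitRequestFileSystem(window.PERSISTENT, grantedBytes, function (fs) {
      fs.root.getFile(file.name, { create: true }, function (entry) {
        entry.createWriter(function (writer) {
          writer.onwriteend = function () {
            localStorage.setItem('savedAudioName', file.name);          // remember the file
            document.getElementById('audioPlayer').src = entry.toURL();
          };
          writer.write(file);                                           // File is a Blob
        });
      });
    });
  });
}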

Upload video to YouTube via the V3 API with Flex/AS3

I am currently working on an AIR application to upload videos to YouTube. Since I got the rather absurd requirement to support files of up to 80 GB (we do not need to discuss this; I also think it's nonsense), I decided to use the resumable upload to send the file in chunks, as described on https://developers.google.com/youtube/v3/guides/using_resumable_upload_protocol
But for some reason, if I add the Content-Range header, I always receive an Error #2032. If I do not add the Content-Range header, the upload works, but only for the first chunk.
Has anyone managed to upload a file with the V3 API in AS3/Flex?
Error #2032 usually occurs when your program or running application becomes unresponsive. Some common reasons for this:
Your proxy settings may be invalid
Your website may be on a restriction list
Your cookies may be corrupted
Your browser add-ons may be responsible
Your registry may be corrupted
Source: Adobe Forums
Okay, I found out what the "problem" is.
After uploading a chunk, the YouTube servers return HTTP 308, which gets treated as an error. Actually it isn't; it is YouTube's status for "Resume Incomplete". So the solution is simply to add an event listener for HTTP_RESPONSE_STATUS, check for status 308, and then send the next chunk. HTTP 200, and therefore the COMPLETE event, is only fired after uploading the last chunk.

Record video from browser using webcam and microphone inputs

I need to record a video through the user's browser, using input from the camera and microphone, and send it to my server. Since HTML5 still doesn't make that magic happen, I'm looking for Flash solutions.
Do I really need a Flash media server to do that, or can I do a POST request?
I want to get both inputs (webcam and microphone), put them in an .flv, and send it to my server.
I've seen some implementations that use byte arrays to record and send audio and video separately. The problem is that this generates a series of synchronization problems when you try to compose them into a single file.
If you're still looking for a solution, check out:
http://framebase.io
They have an embeddable recording widget that can transcode the videos automatically. I'd check out the docs, but on success you can run an API call to check the status of the transcoding and download the result to your server, or you can just use your own S3 bucket.