I'm using Windows Azure Media Services to upload videos, and I want to know if there is a way to merge several video files into a single file.
If it's impossible, which API is best for video merging effects? I already sent a request for access to the Animoto API, but there has been no response so far.
WAMS does not offer that feature/functionality.
You can do that sort of thing with many tools prior to upload; this is considered video editing.
WAMS offers a range of platform services: secure uploading, encoding, encryption, streaming, adaptive streaming, dynamic transmuxing, and secure egress, with many more features to come.
But video editing is not on the roadmap.
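If simple concatenation before upload is enough, one widely used tool is ffmpeg. A minimal sketch using its concat demuxer (the file names are placeholders, and -c copy only works when all clips share the same codecs and encoding parameters):

    # inputs.txt lists the clips to join, one per line:
    #   file 'clip1.mp4'
    #   file 'clip2.mp4'
    ffmpeg -f concat -safe 0 -i inputs.txt -c copy merged.mp4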
Apparently I used totally the wrong keywords while googling, because I'm looking for solutions on how to embed videos in my webpage while making it "impossible" (i.e. hard) to download them directly as an MP4 file. I mean, with various players you can quite easily find the original file on the web server directly within the browser...
At the opposite end there are pages like YouTube where you cannot really find the full file and would have to use third-party tools to download it.
Do you know of any libraries/modules that support embedding the way YouTube does?
Thanks
It really is not that hard to download/capture the file if you are making it available to stream to a device, even for YouTube videos, so you have to consider what your goals are.
Most content protection systems, or Digital Rights Management (DRM) systems, don't really attempt to stop someone capturing the file. Rather, they try to ensure that the captured file is of no use by encrypting it so that it cannot be played back.
The tricky part then moves to securely sharing the decryption key with authorised users in a way that neither they nor a third party can view or share the key. This is the essence of nearly all common DRM systems.
If you do want to use DRM but don't want to pay for a full DRM solution, then you could use clear key encryption with MPEG-DASH streaming. This essentially transmits the key with the stream, so it is not very secure, but it may meet your needs. There is some info on using it with a cloud encoding service here:
https://bitmovin.com/tutorials/mpeg-cenc-clearkey-drm-encryption/
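For a concrete idea of how clear key playback is wired up on the client, here is a minimal sketch assuming the dash.js player and its setProtectionData API; the manifest URL and the base64url kid/key pair are placeholders, not working credentials:

    // Clear key sketch with dash.js: the key travels in page code, which
    // is exactly why this scheme is not very secure.
    var player = dashjs.MediaPlayer().create();
    player.setProtectionData({
        'org.w3.clearkey': {
            clearkeys: {
                'base64url-kid-here': 'base64url-key-here'
            }
        }
    });
    player.initialize(document.querySelector('video'), 'https://example.com/stream.mpd', true);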
I've seen that using the HTML5 API it's possible to record/upload video content straight from the browser. The issue in a project I'm currently working on is that the video recordings can be very long/big, and I'd like to mitigate upload time for the user.
Ideally the video would be uploaded in one of two ways:
As it's being recorded (streaming upload).
For worse network connections, upload the video in smaller chunks (so store locally and then upload a chunk every 5 minutes, let's say).
Does anyone have any guidance on whether these could practically work with the current level of HTML5 functionality, and if so, whether there are any good resources on the subject?
WebRTC-based MediaStream Recording (http://www.w3.org/TR/mediastream-recording/) sounds like it is what you are looking for, as Robert suggests in the comments.
There is a Javascript library available on GitHub which looks like it should meet your needs:
https://github.com/streamproc/MediaStreamRecorder
In particular they note:
MediaStreamRecorder is useful in scenarios where you're planning to submit/upload recorded blobs in realtime to the server! You can get blobs after specific time-intervals.
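As a rough sketch of that pattern using the standardized MediaRecorder API that the spec above defines (the upload endpoint is a placeholder):

    // Record from the camera and POST each chunk as soon as it is ready,
    // so a long recording never has to be uploaded in one piece.
    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
        .then(function (stream) {
            var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
            recorder.ondataavailable = function (event) {
                // event.data is a Blob holding the latest time-slice.
                fetch('https://example.com/upload', { method: 'POST', body: event.data });
            };
            recorder.start(5 * 60 * 1000); // emit a chunk every 5 minutes
        });

A short time-slice approximates a streaming upload (your first case), while a longer one gives the periodic chunked upload of your second case.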
How do I achieve streaming of audio and video data and pass it over the network? I went through a good article here, but it did not go into depth. I want to build a chat application in HTML5.
My main questions are:
How to stream the audio and video data
How to send it to a particular IP address
How to receive that data and pass it to the video and audio controls
If you want to serve a stream, you need a server doing so, either by downloading and installing one or by coding your own.
Streams only work in one direction; there is no responding or "retrieving back". Streaming is almost the same as downloading, with slight differences depending on the service and use case.
Most streams are downstreams, but there are also upstreams. Did you hear about BufferStreams in PHP, Java, whatever? It's basically the same: data -> direction -> cursor.
Streams work over many protocols, even via different network layers, for example:
network/subnet broadcast, peer-to-peer, HTTP, DLNA, even FTP streams, ...
The basic nature of a stream is nothing more than data being sent to an audience.
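To make that concrete, here is a minimal HTTP downstream in Node.js (assumptions: a local file audio.mp3 exists, and the port is a placeholder):

    // Every connecting client gets the file piped down as a stream:
    // data -> direction -> cursor.
    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
        res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
        fs.createReadStream('audio.mp3').pipe(res);
    }).listen(8080);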
You need to decide:
which protocol do you want to use for streaming
which server software
which media / source / live or with selectable start/end
which clients
The most popular HTTP streaming server is Shoutcast by Nullsoft (Winamp).
There is also DLNA, which AFAIK is not HTTP-based.
To provide more information, you need to be more specific regarding your basic requirements and decisions.
I am building an application that allows authenticated users to use a Web browser to upload MP3 audio files (of speeches) to a server, for distributing the audio on a network. The audio files need to use a specific bit rate (32 kbps or less) to ensure efficient use of bandwidth, and an approved sampling rate (22.050 kHz or 44.100 kHz) to maximize compatibility. Rather than validate these requirements following the upload using a server-side script, I was hoping to use the HTML5 FileReader to determine this information prior to the upload. If the browser detects an invalid bit rate and/or sampling rate, the user can be advised of this, and the upload attempt can be blocked until the necessary revisions are made to the audio file.
Is this possible using HTML5? Please note that the question is regarding HTML5, not about my application's approach. Can HTML5 detect the sampling rate and/or bit rate of an MP3 audio file?
FYI note: I am using an FTP java applet to perform the upload. The applet is set up to automatically forward the user to a URL of my choosing following a successful upload. This puts the heavy lifting on the client, rather than on the server. It's also necessary because the final destination of each uploaded file is different; they can be on different servers and different domains, possibly supporting different scripting languages on the server. Any one server would quickly exceed its storage space otherwise, or if the server-side script did an FTP transfer, the server's performance would quickly degrade as a single point of failure. So for my application, which stores uploaded audio files on multiple servers and multiple domains, validation of the bit rate and sampling rate must take place on the client side.
You can use the FileReader API and JavaScript-based audio codecs to extract this information from the audio files.
One library providing base code for pure-JS codecs is Aurora.js; the actual codec code is built upon it:
https://github.com/audiocogs/aurora.js/wiki/Known-Uses
Naturally, the browser must support the FileReader API.
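If you only need the sampling rate and bitrate rather than full decoding, you don't strictly need a codec: the first MP3 frame header already carries both. A deliberately simplified sketch (assumptions: constant-bitrate Layer III audio and the MPEG-1 bitrate table; a real validator would also need the MPEG-2 bitrate table and VBR/Xing handling):

    // Read the first frame header of an MP3 with FileReader and decode
    // its sampling rate and (MPEG-1 Layer III) bitrate.
    function inspectMp3(file, callback) {
        var reader = new FileReader();
        reader.onload = function () {
            var b = new Uint8Array(reader.result);
            var i = 0;
            // Skip an ID3v2 tag if present (10-byte header, syncsafe size).
            if (b[0] === 0x49 && b[1] === 0x44 && b[2] === 0x33) {
                i = 10 + ((b[6] << 21) | (b[7] << 14) | (b[8] << 7) | b[9]);
            }
            // Find the 11-bit frame sync.
            while (i < b.length - 4 && !(b[i] === 0xFF && (b[i + 1] & 0xE0) === 0xE0)) {
                i++;
            }
            var version = (b[i + 1] >> 3) & 3; // 3 = MPEG-1, 2 = MPEG-2, 0 = MPEG-2.5
            var rate = [44100, 48000, 32000][(b[i + 2] >> 2) & 3];
            if (version === 2) { rate /= 2; } // MPEG-2: 22050, 24000, 16000
            if (version === 0) { rate /= 4; } // MPEG-2.5
            var kbps = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128,
                        160, 192, 224, 256, 320][b[i + 2] >> 4];
            callback({ sampleRate: rate, bitrateKbps: kbps });
        };
        reader.readAsArrayBuffer(file.slice(0, 131072)); // first 128 KB is plenty
    }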
I didn't understand from your use case why you need a Java applet or FTP. HTTP uploads work fine for multiple big files if done properly using an async backend (like Node.js or Python Twisted) and scalable storage (Amazon S3). A similar use case is resizing incoming images, which is a far more demanding application than extracting audio metadata from a file. The only benefit on the client side is reducing the number of unnecessary uploads by not-so-technically-aware users.
Given that any user can change your script/markup to bypass this or even re-purpose it, I wouldn't even consider it.
If someone with a bit of knowledge of HTML/JavaScript can change your validation script, don't rely on HTML/JavaScript. It's easier to make sure the upload is validated, and validated correctly, by validating it on the server.
This may be too broad a question, but how is SoundCloud actually programmed?
To be more specific,
What language was used to program it?
How does it display the frequency data?
If a user uploads a file in a format other than MP3, is it converted to MP3 or played as is? If the former, how does the conversion work?
How does it appear "graphically" in a browser the way it does? Is that also an HTML5 thing that I don't know anything about?
I'm a big fan of SoundCloud and couldn't stop wondering how all of this works!
Please help me out :)
SoundCloud developer here,
The API and the current website are built with Rails. For information about the architecture/infrastructure and how it evolved over the last 5 years, check out Evolution of SoundCloud's Architecture. The "next" version of the website (still in private beta) is built entirely with Javascript, and just uses the API to get its data. There's a lot more detail available in Building The Next SoundCloud.
I'm not sure exactly what language/libraries are used to process the audio, but many audio libraries do provide frequency data, and we just extract that.
Users can upload AIFF, WAVE (WAV), FLAC, OGG, MP2, MP3, AAC, AMR or WMA files. The originals are kept exactly as is for the download option, but for streaming on the site, they're converted to 128kbps MP3 files. Again, I'm not sure of the software/libraries, but I'm pretty sure it'd be ffmpeg.
For displaying the waveforms: on the back-end, when the audio files are processed at upload time, the waveform data is saved into a PNG file. On the current version of the website, we simply load that file. On Next, the PNG is processed to get the original data back out, and then it is drawn to a canvas at the exact dimensions needed (which keeps the image crisp). We're currently experimenting with getting waveform data in a JSON format to speed up this process.
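As an illustration of the canvas step (this is not SoundCloud's actual code, just a sketch assuming the decoded waveform arrives as an array of peak values in the range 0..1):

    // Draw waveform peaks onto a canvas at its exact pixel dimensions,
    // mirroring each bar around the vertical midpoint.
    function drawWaveform(canvas, peaks) {
        var ctx = canvas.getContext('2d');
        var mid = canvas.height / 2;
        var step = canvas.width / peaks.length;
        ctx.clearRect(0, 0, canvas.width, canvas.height);
        peaks.forEach(function (p, i) {
            var h = p * mid;
            ctx.fillRect(i * step, mid - h, Math.max(1, step - 1), h * 2);
        });
    }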
I am copying the following info, which David Noël posted elsewhere in 2010.
Web tier: Varnish, nginx, haproxy, thin
Data Management: Cassandra, MongoDB, mySQL master/slave cluster, memcached
Web framework: Ruby on Rails
CDN: Akamai and Edgecast
Transcoding/storage: AWS EC2/S3