Hello everyone. I have a small WinRT application that downloads video from the internet, and I was trying to use BackgroundDownloader and FileSavePicker together, but I run into errors with every implementation I try. I searched Google and the Microsoft documentation, but found nothing. I implemented the download via the HttpClient class, but what I want is download progress, and HttpClient doesn't offer it. Thanks in advance.
Here's a quick sample of how to do it:
// set download URI
var uri = new Uri("http://s3.amazonaws.com/thetabletshow/thetabletshow_0072_lhotka.mp3");
// get destination file
var picker = new FileSavePicker();
// set allowed extensions
picker.FileTypeChoices.Add("MP3", new List<string> { ".mp3" });
var file = await picker.PickSaveFileAsync();
// create a background download
var downloader = new BackgroundDownloader();
var download = downloader.CreateDownload(uri, file);
// create progress object
var progress = new Progress<DownloadOperation>();
// attach an event handler to get notified on progress
progress.ProgressChanged += (o, operation) =>
{
// use the progress info in Progress.BytesReceived and Progress.TotalBytesToReceive
ProgressText.Text = operation.Progress.BytesReceived.ToString();
};
// start the actual download
await download.StartAsync().AsTask(progress);
You should be able to modify it for your needs from here on.
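For example, if you want a percentage rather than a raw byte count, the progress handler could be changed along these lines (just a sketch; ProgressText is assumed to be a TextBlock in your XAML, and TotalBytesToReceive can be 0 when the server doesn't report a size):
progress.ProgressChanged += (o, operation) =>
{
    var received = operation.Progress.BytesReceived;
    var total = operation.Progress.TotalBytesToReceive;
    if (total > 0)
    {
        // formats e.g. "42 %"
        ProgressText.Text = string.Format("{0:P0}", (double)received / total);
    }
};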
I've built an application on the Azure (Microsoft) Emotion API, but that was just merged into their Cognitive Services Face API. I'm using a webcam to send an image (as binary data) to their server for analysis, and I used to get XML in return. (I've already commented out some old code in this example while trying to get it fixed.)
function saveSnap(data){
// Convert Webcam IMG to BASE64BINARY to send to EmotionAPI
var file = data.substring(23).replace(' ', '+');
var img = Base64Binary.decodeArrayBuffer(file);
var ajax = new XMLHttpRequest();
// On return of data call uploadcomplete function.
ajax.addEventListener("load", function(event) {
uploadcomplete(event);
}, false);
// AJAX POST request
ajax.open("POST", "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=emotion","image/jpg");
ajax.setRequestHeader("Content-Type","application/json");
//ajax.setRequestHeader("Accept","text/html,application/xhtml+xml,application/xml");
ajax.setRequestHeader("Ocp-Apim-Subscription-Key","subscription_key");
ajax.send(img);
}
Now, I understand from their website that the call returns JSON, but I just can't get it to work. I can see there is data coming back, but how do I even get the JSON out of it? I'm probably missing something essential and hope someone can help me out. :) The program was working when I could still use the Emotion API.
function uploadcomplete(event){
console.log("complete");
console.log(event);
//var xmlDoc = event.target.responseXML;
//var list = xmlDoc.getElementsByTagName("scores");
console.log(JSON.stringify(event));
}
A few issues to address:
You'll want to wait for the POST response, not just for the upload completion.
You'll want to set the content type to application/octet-stream, since you are uploading binary data.
You'll want to set the subscription key to the real value (you probably did before pasting your code here).
function saveSnap(data) {
// Convert Webcam IMG to BASE64BINARY to send to EmotionAPI
var file = data.substring(23).replace(/ /g, '+');
var img = Base64Binary.decodeArrayBuffer(file);
var ajax = new XMLHttpRequest();
ajax.onreadystatechange = function() {
if (ajax.readyState == XMLHttpRequest.DONE) {
// the response body is a JSON string; parse it to work with the result
var result = JSON.parse(ajax.responseText);
console.log(result);
}
}
ajax.open('post', 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=emotion');
ajax.setRequestHeader('Content-Type', 'application/octet-stream');
ajax.setRequestHeader('Ocp-Apim-Subscription-Key', key);
ajax.send(img);
}
Here is my scenario. I've created an Add-On for Google Docs that acts as a video toolbox.
A feature I'm trying to add is the ability to record a video using the built-in webcam (using videojs-recorder) and then link to that video within the doc. I've got the video part working, but I'm not sure how to convert the webm JS Blob into a Google Blob so I can create a file on the user's Google Drive for sharing and linking.
Just to figure out how this might work, here is what I've done so far, without any luck.
CLIENT SIDE CODE
//event handler for video recording finish
vidrecorder.on('finishRecord', function()
{
// the blob object contains the recorded data that
// can be downloaded by the user, stored on server etc.
console.log('finished recording: ', vidrecorder.recordedData);
google.script.run.withSuccessHandler(function(){
console.log("winning");
}).saveBlob(vidrecorder.recordedData);
});
SERVER SIDE CODE
function saveBlob(blob) {
Logger.log("Uploaded %s of type %s and size %s.",
blob.name,
blob.size,
blob.type);
}
The errors I get seem to be related to serialization of the blob, but the exceptions aren't very useful; they just point to some minified code.
EDIT: Note that there is no form object involved here, hence no form POST and no FileUpload objects. Others have indicated that this might be a duplicate; however, it's slightly different in that we are getting a Blob object and need to save it to the server.
Thanks go to Zig Mandel and Steve Webster, who provided some insight in the G+ discussion regarding this.
I finally pieced together enough bits to get this working.
CLIENT CODE
vidrecorder.on('finishRecord', function()
{
// the blob object contains the recorded data that
// can be downloaded by the user, stored on server etc.
console.log('finished recording: ', vidrecorder.recordedData.video);
var blob = vidrecorder.recordedData.video;
var reader = new window.FileReader();
reader.readAsDataURL(blob);
reader.onloadend = function() {
var b64Blob = reader.result;
google.script.run.withSuccessHandler(function(state){
console.log("winning: ", state);
}).saveB64Blob(b64Blob);
};
});
SERVER CODE
function saveB64Blob(b64Blob) {
var success = { success: false, url: null};
Logger.log("Got blob: %s", b64Blob);
try {
var blob = dataURItoBlob(b64Blob);
Logger.log("GBlob: %s", blob);
var file = DriveApp.createFile(blob);
file.setSharing(DriveApp.Access.ANYONE_WITH_LINK, DriveApp.Permission.COMMENT);
success = { success: true, url: file.getUrl() };
} catch (error) {
Logger.log("Error: %s", error);
}
return success;
}
function dataURItoBlob(dataURI) {
// convert base64/URLEncoded data component to raw binary data held in a string
var byteString;
if (dataURI.split(',')[0].indexOf('base64') >= 0)
byteString = Utilities.base64Decode(dataURI.split(',')[1]);
else
byteString = decodeURI(dataURI.split(',')[1]);
// separate out the mime component
var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];
return Utilities.newBlob(byteString, mimeString, "video.webm");
}
I created a project based on:
https://github.com/Microsoft/real-time-filter-demo/tree/master/RealtimeFilterDemoWP
My question is: how do I enable the flashlight (torch) on WP 8.1?
Should I use MediaCapture?
var mediaDev = new MediaCapture();
await mediaDev.InitializeAsync();
var videoDev = mediaDev.VideoDeviceController;
var tc = videoDev.TorchControl;
if (tc.Supported)
{
if (tc.PowerSupported)
tc.PowerPercent = 100;
tc.Enabled = true;
}
When I use it, it crashes on
var videoDev = mediaDev.VideoDeviceController;
with an unhandled exception.
How do I add a flashlight to this sample project?
You haven't initialized the MediaCaptureSettings, so the exception occurs when you attempt to access the VideoDeviceController. You need to initialize the settings, tell MediaCapture which device you'd like to use, and set up the VideoDeviceController. In addition, some Windows Phone 8.1 camera drivers require you to start the preview, and others require you to start video recording, before the flash will turn on. This is because the flash is tightly coupled with the camera device.
Here's some general code to give you the idea. Disclaimer: this is untested. Be sure to call this from an async Task method so that the awaited calls complete before you attempt to toggle the TorchControl property.
private async Task InitializeAndToggleTorch()
{
// Initialize Media Capture and Settings Objects, mediaCapture declared global outside this method
mediaCapture = new MediaCapture();
MediaCaptureInitializationSettings settings = new MediaCaptureInitializationSettings();
// Grab all available VideoCapture Devices and find rear device (usually has flash)
DeviceInformationCollection devices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
DeviceInformation device = devices.FirstOrDefault(x => x.EnclosureLocation != null && x.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Back);
// Set Video Device to device with flash obtained from DeviceInformation
settings.VideoDeviceId = device.Id;
settings.AudioDeviceId = "";
settings.PhotoCaptureSource = PhotoCaptureSource.VideoPreview;
settings.StreamingCaptureMode = StreamingCaptureMode.Video;
mediaCapture.VideoDeviceController.PrimaryUse = Windows.Media.Devices.CaptureUse.Video;
// Initialize mediacapture now that settings are configured
await mediaCapture.InitializeAsync(settings);
if (mediaCapture.VideoDeviceController.TorchControl.Supported)
{
// Get resolutions and set to lowest available for temporary video file.
var resolutions = mediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.VideoRecord).Select(x => x as VideoEncodingProperties);
var lowestResolution = resolutions.OrderBy(x => x.Height * x.Width).ThenBy(x => (x.FrameRate.Numerator / (double)x.FrameRate.Denominator)).FirstOrDefault();
await mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoRecord, lowestResolution);
// Get resolutions and set to lowest available for preview.
var previewResolutions = mediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.VideoPreview).Select(x => x as VideoEncodingProperties);
var lowestPreviewResolution = previewResolutions.OrderByDescending(x => x.Height * x.Width).ThenBy(x => (x.FrameRate.Numerator / (double)x.FrameRate.Denominator)).LastOrDefault();
await mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoPreview, lowestPreviewResolution);
// Best practice, you should handle Media Capture Error events
mediaCapture.Failed += MediaCapture_Failed;
mediaCapture.RecordLimitationExceeded += MediaCapture_RecordLimitationExceeded;
}
else
{
// Torch not supported, exit method
return;
}
// Start Preview
var captureElement = new CaptureElement();
captureElement.Source = mediaCapture;
await mediaCapture.StartPreviewAsync();
// Prep for video recording
// Get Application temporary folder to store temporary video file folder
StorageFolder tempFolder = ApplicationData.Current.TemporaryFolder;
// Create a temp Flash folder
var folder = await tempFolder.CreateFolderAsync("TempFlashlightFolder", CreationCollisionOption.OpenIfExists);
// Create video encoding profile as MP4
var videoEncodingProperties = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Auto);
// Create new unique file in the Flash folder and record video
var videoStorageFile = await folder.CreateFileAsync("TempFlashlightFile", CreationCollisionOption.GenerateUniqueName);
// Start recording
await mediaCapture.StartRecordToStorageFileAsync(videoEncodingProperties, videoStorageFile);
// Now Toggle TorchControl property
mediaCapture.VideoDeviceController.TorchControl.Enabled = true;
}
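When you're done with the torch you'll also want to tear all of this down again. A rough, untested sketch of the cleanup, reusing the global mediaCapture and passing in the temporary videoStorageFile from above:
private async Task StopTorchAndCleanUp(StorageFile videoStorageFile)
{
    // Turn the torch off before releasing the camera
    mediaCapture.VideoDeviceController.TorchControl.Enabled = false;
    // Stop the temporary recording and the preview
    await mediaCapture.StopRecordAsync();
    await mediaCapture.StopPreviewAsync();
    // The temporary video file is no longer needed
    await videoStorageFile.DeleteAsync();
    // Release the camera device
    mediaCapture.Dispose();
}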
Phew! That's a lot of code just to toggle the flash, huh? The good news is that this is fixed in Windows 10 with the new Windows.Devices.Lights.Lamp API. You can do the same work in just a few lines of code:
Windows 10 Sample for Windows.Devices.Lights.Lamp
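As a rough illustration (untested), the Windows 10 version boils down to something like this; ToggleTorchAsync is just an example helper:
// keep the Lamp as a field so the torch stays on until you turn it off
private Lamp lamp;

private async Task ToggleTorchAsync(bool on)
{
    if (lamp == null)
    {
        // Windows.Devices.Lights.Lamp: the default lamp is usually the camera flash
        lamp = await Lamp.GetDefaultAsync();
        if (lamp == null)
            return; // this device has no lamp
        lamp.BrightnessLevel = 1.0f; // full brightness, where adjustable
    }
    lamp.IsEnabled = on;
}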
For reference, check this thread:
MSDN using Win8.1 VideoDeviceController.TorchControl
var uri = new System.Uri("ms-appx:///Assets/FixesViaMail_17Dec.pdf", UriKind.Absolute);
StorageFile file = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(uri);
This code works very well because I am using a local file path from my app package, but whenever I use this code
var uri = new System.Uri("http://www.visa.com/assets/upload/11458/Hotel/Voucher114581423144270.pdf", UriKind.Absolute);
StorageFile file = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(uri);
then I get this error:
Value does not fall within the expected range.
Please help me.
GetFileFromApplicationUriAsync is only used for loading a file from within your application. To read a file from the web you'll need to download it. You can do this with either the HttpClient (for small files) or BackgroundDownloader (for large files) classes.
var uri = new Uri("http://www.visa.com/assets/upload/11458/Hotel/Voucher114581423144270.pdf");
var httpClient = new HttpClient();
// Always catch network exceptions for async methods
try
{
var response = await httpClient.GetAsync(uri);
// save response out
}
catch (Exception ex)
{
// Details in ex.Message and ex.HResult.
}
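To fill in the "save response out" step above, one possible sketch: assuming the Windows.Web.Http.HttpClient from the linked article and a StorageFile named destinationFile that you have already created (for example with a FileSavePicker, as in the first sample), the try block could continue with:
var buffer = await response.Content.ReadAsBufferAsync();
await FileIO.WriteBufferAsync(destinationFile, buffer);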
See Connecting to an HTTP server using Windows.Web.Http.HttpClient (XAML) for more details.
var uri = new Uri("http://www.visa.com/assets/upload/11458/Hotel/Voucher114581423144270.pdf");
BackgroundDownloader downloader = new BackgroundDownloader();
DownloadOperation download = downloader.CreateDownload(uri, destinationFile);
await download.StartAsync();
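Here destinationFile is again assumed to be a StorageFile you created up front, for example in the app's local folder (the file name is only illustrative):
StorageFile destinationFile = await ApplicationData.Current.LocalFolder
    .CreateFileAsync("Voucher.pdf", CreationCollisionOption.ReplaceExisting);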
See Transferring data in the background for more details.
I am trying to upload a file to the server. I'm doing it like this:
var fileRef:FileReference = new FileReference();
fileRef.addEventListener(flash.events.Event.SELECT, selectHandler);
fileRef.addEventListener(flash.events.Event.COMPLETE, completeHandler);
fileRef.addEventListener(ProgressEvent.PROGRESS, normalprogressHandler);
fileRef.browse();
function selectHandler(event:flash.events.Event):void
{
var params:URLVariables = new URLVariables();
params.date = new Date();
params.ssid = "94103-1394-2345";
var request:URLRequest = new URLRequest("http://www.test.com/Uploads");
request.method = URLRequestMethod.POST;
request.data = params;
fileRef.upload(request, "Custom1");
}
function completeHandler(event:flash.events.Event):void
{
trace("uploaded");
}
function normalprogressHandler(event:ProgressEvent):void
{
var percent:Number = Math.floor((event.bytesLoaded * 100)/ event.bytesTotal );
trace(percent+"%");
}
Would it be possible to upload a file without browsing for it? I want to decide myself which file to upload, instead of the user performing a browse first.
You can't do that with FileReference, which has the following limitations (reference):
The load() and save() APIs can only be called in response to user interaction (such as a button click).
The locations of the loaded and saved files are not exposed to ActionScript.
The APIs are asynchronous (non-blocking).
Clearly, it would represent a major security risk if the Flash player was arbitrarily allowed to upload anything from your local file system to a remote server.
If you're trying to upload from an AIR app, you can do what you're trying to do with the File class.