I have captured 3 videos on my mobile, which are stored by default in the phone's gallery (Gallery/videos/). I have to play these 3 videos in one of my Flex mobile applications. How can I get the videos into the Flex project? If I need to browse the device's directory, kindly help me with some code to do so.
I too am looking for an answer to this question. Right now, based on other Stack Overflow discussions, exhaustive perusal of tutorials and Adobe documentation, and comments on both (often the more useful resource), I'm coming to the conclusion that it's not possible.
You can use CameraRoll.browseForImage() to open the iOS gallery of photos and see all entities of MediaType.IMAGE, but it will not show you MediaType.VIDEO.
You can use CameraUI to launch the system camera by delegation, and that returns a MediaPromise, but as far as I can tell it does not save the video you capture anywhere, and I cannot find a way to access the captured video through the MediaPromise (at least using the Loader class).
Here's my code as a hint in that direction. The second code block uses the CameraRoll to browseForImage(), but there is no browseForVideo() in the API.
if (CameraUI.isSupported)
{
    camera = new CameraUI();
    camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.addEventListener(Event.CANCEL, cameraCanceled);
    camera.addEventListener(ErrorEvent.ERROR, cameraError);
    camera.launch(MediaType.VIDEO);
}
else
{
    statusText.text = "Camera not supported on this device.";
    startTimer();
}
if (CameraRoll.supportsBrowseForImage)
{
    roll = new CameraRoll();
    roll.addEventListener(MediaEvent.SELECT, cameraRollEventComplete);
    roll.addEventListener(Event.CANCEL, cameraCanceled);
    roll.addEventListener(ErrorEvent.ERROR, cameraError);
    roll.browseForImage();
}
else
{
    statusText.text = "Camera roll not supported on this device.";
    startTimer();
}
I've since found that videos captured using the delegated system camera are stored in a temporary storage location that iOS -DOES!- allow access to. (I was pleasantly shocked.)
The captured video is not added to the device's Camera Roll like other videos captured with the iOS system Camera app, so it's not enough to capture video and expect to be able to access it later (if, for instance, CameraRoll.browseForVideo() is ever added to the API).
Therefore, you have to 'get while the getting is good' and move the file from the temporary storage location to some non-volatile location such as ApplicationStorageDirectory or the user's Documents directory (the only options on iOS, I think).
The MediaPromise, I think, is completely useless for accessing the video via any direct progressive loader/streamer method, but it still provides the location/url/path/filename of the temporary file, so you can perform File operations on it.
It's ironic that there are tutorials for getting around the lack of a file location/url/path/filename in the MediaPromise when using CameraRoll.browseForImage() (the trick is to use a loader class to load the image content, which you can then write out to a file), yet when taking video the situation is reversed: the video content is not accessible, but a file location/url/path/filename is provided. It's also ironic that I could find almost no resources to help with this. grumble
I'm going to include some code chunks without really editing out the extraneous bits, because it's way past when I need to be in bed, but I wanted you to have this. I may come back and clean it up later.
This section lives in a Spark SkinnablePopUpContainer, and I use the same click event for several buttons, so the 'case' below is from the switch-case in that event handler function.
In case you are not familiar, close(true, data) is the method that closes the SkinnablePopUpContainer, tells the parent/owner that the container was closed purposefully, and signals that it should look for the data object being shared back (i.e., there are changes to be committed).
case "cameraVideo":
{
if(CameraUI.isSupported)
{
camera = new CameraUI();
camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
camera.addEventListener(Event.CANCEL, cameraCanceled);
camera.addEventListener(ErrorEvent.ERROR, cameraError);
camera.launch(MediaType.VIDEO);
}
else
{
statusText.text = "Camera not supported on this device.";
startTimer();
}
break;
}
protected function cameraCanceled(event:Event):void
{
    statusText.text = "Camera access canceled by user.";
    startTimer();
}

protected function cameraError(event:ErrorEvent):void
{
    statusText.text = "There was an error while trying to use the camera.";
    startTimer();
}

protected function videoMediaEventComplete(event:MediaEvent):void
{
    statusText.text = "Preparing captured video...";
    camera.removeEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.removeEventListener(Event.CANCEL, cameraCanceled);
    camera.removeEventListener(ErrorEvent.ERROR, cameraError);
    var media:MediaPromise = event.data;
    data.MediaType = MediaType.VIDEO;
    data.MediaPromise = media;
    data.source = "camera video";
    close(true, data);
}
This section is the ActionScript in the close handler of the parent/owner of the SkinnablePopUpContainer (truncated once the useful code is included):
private function choosePictureLightboxClosed(event:PopUpEvent):void
{
    imageButtonsActive = false;
    if (event.commit)
    {
        this.data = event.data as Object;
        filters = new Array();
        selection = true;
        switch (data.MediaType)
        {
            case MediaType.VIDEO:
            {
                mediaType = "video";
                trace(data.MediaPromise.file.url + " - " + data.MediaPromise.relativePath + " - " + data.MediaPromise.mediaType);
                var sourceFile:File = new File(data.MediaPromise.file.url);
                var destinationFile:File = File.applicationStorageDirectory.resolvePath("User" + parentApplication.userid);
                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath("Videos");
                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath(parentApplication.userid + "Video" + new Date().getTime() + ".mov");
                trace(destinationFile.nativePath);
                sourceFile.moveTo(destinationFile, true);
                break;
            }
I sure do hope this helps. This has been a very frustrating (and costly, in terms of our project being government-grant funded and having deadlines we utterly failed to meet) experience, and I very much hope these hard-won solutions help others avoid the same.
In our project, we use AudioContext to wire up input from a microphone to an AudioWorkletProcessor and out to a MediaStream. Ultimately, this is sent to other peers in a WebRTC call.
If someone loads the page, the audio always sounds fine. But if they connect with a hard-wired microphone (like a laptop mic or webcam) and then connect a Bluetooth device (such as AirPods or headphones), the audio becomes distorted and robotic-sounding.
If we tear out all the other code and simplify it, we still have the issue.
bypassProcessor.js
// Basic processor that wires input to output without transforming the data
// https://github.com/GoogleChromeLabs/web-audio-samples/blob/main/audio-worklet/basic/hello-audio-worklet/bypass-processor.js
class BypassProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    for (let channel = 0; channel < output.length; ++channel) {
      output[channel].set(input[channel]);
    }
    return true;
  }
}

registerProcessor('bypass-processor', BypassProcessor);
main.js
const microphoneStream = await navigator.mediaDevices.getUserMedia({
  audio: true, // have also tried { channelCount: 1 } and { channelCount: { exact: 1 } }
  video: false
})
const audioCtx = new AudioContext()
const inputNode = audioCtx.createMediaStreamSource(microphoneStream)
await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
inputNode.connect(processorNode).connect(audioCtx.destination)
Interestingly, I have found that if you comment out the two audio worklet lines and instead create a simple gain node, it works fine.
// await audioCtx.audioWorklet.addModule('worklet/bypassProcessor.js')
// const processorNode = new AudioWorkletNode(audioCtx, 'bypass-processor')
const gainNode = audioCtx.createGain()
Also, if you simply create the AudioWorkletNode but don't even connect it to the others, the issue still reproduces.
I've created a small React app here that reproduces the problem: https://github.com/JacobMuchow/audio_distortion_repro/tree/master
I've tried some options, such as detecting when this happens using the 'ondevicechange' event and then closing the old AudioContext & nodes and recreating everything, but this only works some of the time. If I wait a while and then recreate it again, it works, so I'm worried about some kind of garbage-collection issue with the processor when attempting this, but that might be beside the point.
I suspect this has something to do with sample rates: when the AudioContext is correctly recreated, it switches from 48 kHz to 16 kHz and then it sounds fine. But sometimes it is recreated still at 48 kHz and continues to sound robotic.
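For reference, the recreate-on-devicechange attempt looks roughly like this (a minimal sketch; setupAudio() is a stand-in for the getUserMedia/worklet code above, and the teardown order is my own guess):
// Sketch only: rebuild the audio graph whenever the device set changes.
// setupAudio() is assumed to wrap the getUserMedia + AudioWorklet code above
// and resolve with the new AudioContext.
let audioCtx = null

navigator.mediaDevices.ondevicechange = async () => {
  if (audioCtx) {
    await audioCtx.close() // release the old context and its output device
    audioCtx = null
  }
  audioCtx = await setupAudio() // re-run the getUserMedia + worklet setup
}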
Threads on the internet concerning this are incredibly sparse and I'm hoping someone has specific experience with this issue or this API and can point out what I need to do differently.
For Chrome, the problem is very likely https://crbug.com/1090441, which was recently fixed. I think Firefox doesn't have this problem, but I didn't check.
I am currently looking for the best way to store incoming WebRTC video streams. I am joining the video call using WebRTC (via Chrome) and I would like to record every incoming video stream from each participant in the browser.
The solutions I am researching are:
Intercepting network packets coming to the browser, e.g. using Wireshark, and then decoding them, following this article: https://webrtchacks.com/video_replay/
Modifying a browser to store the recording as a file, e.g. by modifying Chromium itself
Any screen recorders or solutions like xvfb & ffmpeg are not options due to resource constraints. Is there any other way that could let me capture packets or encoded video as a file? The solution must work on Linux.
If the media stream is what you want, one method is to override the browser's PeerConnection. Here is an example:
In an extension manifest add the following content script:
content_scripts": [
{
"matches": ["http://*/*", "https://*/*"],
"js": ["payload/inject.js"],
"all_frames": true,
"match_about_blank": true,
"run_at": "document_start"
}
]
inject.js
var inject = '(' + function() {
  // Override the browser's default RTCPeerConnection.
  var origPeerConnection = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
  // Make sure it is supported.
  if (origPeerConnection) {
    // Our own RTCPeerConnection.
    var newPeerConnection = function(config, constraints) {
      console.log('PeerConnection created with config', config);
      // Proxy the original peer connection.
      var pc = new origPeerConnection(config, constraints);
      // Store the old addStream.
      var oldAddStream = pc.addStream;
      // addStream is called when a local stream is added;
      // arguments[0] is a local media stream.
      pc.addStream = function() {
        console.log("our addStream called!");
        // Our mediaStream object.
        console.dir(arguments[0]);
        return oldAddStream.apply(this, arguments);
      };
      // ontrack is called when a remote track is added;
      // the media stream(s) are located in event.streams.
      pc.ontrack = function(event) {
        console.log("ontrack got a track");
        console.dir(event);
      };
      window.ourPC = pc;
      return pc;
    };
    ['RTCPeerConnection', 'webkitRTCPeerConnection', 'mozRTCPeerConnection'].forEach(function(obj) {
      // Override the objects if they exist on the window object.
      if (window.hasOwnProperty(obj)) {
        window[obj] = newPeerConnection;
        // Copy the static methods.
        Object.keys(origPeerConnection).forEach(function(x) {
          window[obj][x] = origPeerConnection[x];
        });
        window[obj].prototype = origPeerConnection.prototype;
      }
    });
  }
} + ')();';

var script = document.createElement('script');
script.textContent = inject;
(document.head || document.documentElement).appendChild(script);
script.parentNode.removeChild(script);
I tested this with a voice call in Google Hangouts and saw that two mediaStreams were added via pc.addStream and one track was added via pc.ontrack. addStream appears to receive the local media streams, and the event object in ontrack is an RTCTrackEvent, which has a streams object; I assume those are what you are looking for.
To access these streams from your extension's content script, you will need to create audio elements and set their "srcObject" property to the media stream, e.g.:
pc.ontrack = function(event) {
  // Check if our element exists.
  var elm = document.getElementById("remoteStream");
  if (elm == null) {
    // Create an audio element.
    elm = document.createElement("audio");
    elm.id = "remoteStream";
  }
  // Set the srcObject to our stream (not sure if you need to clone it).
  elm.srcObject = event.streams[0].clone();
  // Write the element to the body.
  document.body.appendChild(elm);
  // Fire a custom event so our content script knows the stream is available.
  // You could pass the id in the "detail" object, for example:
  // new CustomEvent("remoteStreamAdded", {"detail": {"id": "audio_element_id"}})
  // and then access it via e.detail.id in your event listener.
  var e = new CustomEvent("remoteStreamAdded");
  window.dispatchEvent(e);
}
Then, in your content script, you can listen for that event and access the mediastream like so:
window.addEventListener("remoteStreamAdded", function(e) {
  var elm = document.getElementById("remoteStream");
  var stream = elm.captureStream();
})
With the captured stream available to your content script you can do pretty much anything you want with it. For example, MediaRecorder works really well for recording the stream(s), or you could use something like peer.js or maybe binary.js to stream it to another source.
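For instance, a minimal MediaRecorder sketch over the stream variable from the previous snippet might look like this (the mimeType, timeslice, and chunk handling are my own choices; check what your browser supports):
var recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
var chunks = [];
recorder.ondataavailable = function(e) {
  // Each chunk is a Blob of encoded media.
  chunks.push(e.data);
};
recorder.onstop = function() {
  // Assemble the chunks into one Blob you can upload or save.
  var blob = new Blob(chunks, { type: "video/webm" });
  console.log("recorded", blob.size, "bytes");
};
recorder.start(1000); // emit a chunk roughly every second
// later: recorder.stop();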
I haven't tested this, but it should also be possible to override the local streams. For example, in inject.js you could establish some blank mediastream, override navigator.mediaDevices.getUserMedia, and instead of returning the local mediastream, return your own mediastream; a sketch of this follows below.
This method should work in Firefox and maybe others as well, assuming you use an extension/app to load the inject.js script at the start of the document. Loading it before any of the target's libs is key to making this work.
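A rough, untested sketch of that getUserMedia override idea (the replacement logic is left as a stub; the names are mine):
var origGetUserMedia = navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);
navigator.mediaDevices.getUserMedia = function(constraints) {
  return origGetUserMedia(constraints).then(function(stream) {
    console.log("intercepted local stream", stream.id);
    // Return your own MediaStream here instead of the real one,
    // e.g. a canvas.captureStream() or a processed clone.
    return stream;
  });
};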
Capturing packets will only give you the network packets, which you would then need to turn into frames and put into a container. A server such as Janus can record videos.
Running headless Chrome and using the JavaScript MediaRecorder API is another option, but it is much heavier on resources.
Good afternoon.
At the moment I am trying to write the code in the "Main Activity" to send some waypoints to my IRIS drone, but it only works when there are five points. Could you check my code and give me suggestions about what is happening and how I can send more waypoints to my drone? I really appreciate your help because I am new to developing on Android:
Code:
public void onBtnConnectTap3(View view) {
    if (this.drone.isConnected()) {
        this.drone.disconnect();
    } else {
        Spinner connectionSelector = (Spinner) findViewById(R.id.selectConnectionType);
        int selectedConnectionType = connectionSelector.getSelectedItemPosition();
        Bundle extraParams = new Bundle();
        if (selectedConnectionType == ConnectionType.TYPE_USB) {
            extraParams.putInt(ConnectionType.EXTRA_USB_BAUD_RATE, DEFAULT_USB_BAUD_RATE); // Set default baud rate to 57600
        } else {
            extraParams.putInt(ConnectionType.EXTRA_UDP_SERVER_PORT, DEFAULT_UDP_PORT); // Set default UDP server port to 14550
        }
        ConnectionParameter connectionParams = new ConnectionParameter(selectedConnectionType, extraParams, null);
        this.drone.connect(connectionParams);
    }

    currentMission = new Mission();
    currentMission.clear();
    for (int i = 1; i < 20; i++) {
        waypoint2 = new Waypoint();
        yaw = new YawCondition();
        waypoint2.setCoordinate(new LatLongAlt(i, i, i));
        yaw.setAngle(i);
        missionI3 = waypoint2;
        currentMission.addMissionItem(missionI3);
        missionI2 = yaw;
        currentMission.addMissionItem(missionI2);
    }
    this.drone.generateDronie();
    this.drone.setMission(currentMission, true);
    this.drone.arm(true);
}
Dependencies in build.gradle:
dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:22.1.1'
    compile 'com.o3dr.android:dronekit-android:2.3.11'
}
I would also like to know where I can keep learning how to develop Android apps for 3DRobotics drones, taking into consideration that my main sources are http://android.dronekit.io/first_app.html and http://android.dronekit.io/javadoc/
Thanks in advance for your answer.
I'm not completely sure what you are trying to accomplish, but I see some possible errors in your code.
Use the latest version of dronekit-android. The current version is 2.7.0. You can keep up to date on the versions here: https://bintray.com/3drobotics/maven/dronekit-android/view
You are generating a mission with 38 items (19 waypoints and 19 yaws). You are doing a very unsafe thing by setting waypoint coordinates to 1,1,1 ... 19,19,19. Your vehicle will fly somewhere that I assume you didn't intend.
I'm unsure why you have generateDronie(). As per the docs:
Generate action to create a dronie mission, and upload it to the connected drone.
A dronie is a specific type of mission that will fly a selfie path.
setMission() is correct. However, the last step in your code is to arm the vehicle; you still need to tell the drone to actually run the mission. You can do this with the startMission() method in the MissionApi class.
Be careful setting and starting a mission with the same user interaction. There is always the chance that setMission() will fail to upload to the vehicle. If that is the case, startMission() will run the last mission that was successfully uploaded to the vehicle.
You can verify the upload succeeded by listening for the broadcast AttributeEvent.MISSION_SENT.
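A rough sketch of that sequence (untested; it assumes your activity implements DroneListener and has registered itself via registerDroneListener(), and the exact startMission() signature may differ between dronekit-android versions):
@Override
public void onDroneEvent(String event, Bundle extras) {
    if (AttributeEvent.MISSION_SENT.equals(event)) {
        // Upload confirmed, so it is now safe to run the mission.
        MissionApi.getApi(this.drone).startMission(true, true, null);
    }
}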
You can always contribute to the documentation by adding javadocs to APIs that you feel are missing or need clarification.
This issue has been bugging me since the inception of the new Google Drive Android Api (GDAA).
First discussed here, I hoped it would go away in later releases, but it is still there (as of 2014/03/19). The user-trashed (referring to the 'Remove' action in 'drive.google.com') files/folders keep appearing in both the
Drive.DriveApi.query(_gac, query), and
DriveFolder.queryChildren(_gac, query)
as well as
DriveFolder.listChildren(_gac)
methods, even if used with
Filters.eq(SearchableField.TRASHED, false)
query qualifier, or if I use a filtering construct on the results
for (Metadata md : result.getMetadataBuffer()) {
    if ((md == null) || (!md.isDataValid()) || md.isTrashed()) continue;
    dMDs.add(new DrvMD(md));
}
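For the record, the query qualifier above is applied through the standard construction (a sketch):
Query query = new Query.Builder()
    .addFilter(Filters.eq(SearchableField.TRASHED, false))
    .build();
Drive.DriveApi.query(_gac, query);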
Using
Drive.DriveApi.requestSync(_gac);
has no impact. And the time elapsed since the removal varies wildly; my last case was over 12 HOURS. And it is completely random.
What's worse, I can't even rely on EMPTY TRASH in 'drive.google.com'; it does not yield any predictable results. Sometimes the file status changes to 'isTrashed()', sometimes the file disappears from the result list.
As I kept fiddling with this issue, I ended up with the following superawfulhack:
find file with TRASH status equal FALSE
if (file found and is not trashed) {
    try to write content
    if (write content fails)
        create a new file
}
Not even this helps. The file shows up as healthy even when it is in the trash (and its status was double-filtered by query and by metadata test). It can even happily be written to, and when inspected in the trash, it shows the modifications.
The conclusion here is that a fix should get higher priority, since this renders multi-platform use of Drive unreliable. It will be discovered by developers right away in the development/debugging process, steering them away.
While waiting for any acknowledgement from the support team, I devised a HACK that works around this problem. Using the same principle as in SO 22295903, the logic falls back to the RESTful API, basically dropping the LIST/QUERY functionality of GDAA.
The high-level logic is:
query the RESTful API to retrieve the ID/IDs of the file(s) in question
use the retrieved ID to get GDAA's DriveId via fetchDriveId()
Here are the code snippets that document the process:
1/ initialize both GDAA's 'GoogleApiClient' and RESTful's 'services.drive.Drive'
GoogleApiClient _gac;
com.google.api.services.drive.Drive _drvSvc;

void init(Context ctx, String email) {
    // build GDAA GoogleApiClient
    _gac = new GoogleApiClient.Builder(ctx).addApi(com.google.android.gms.drive.Drive.API)
        .addScope(com.google.android.gms.drive.Drive.SCOPE_FILE).setAccountName(email)
        .addConnectionCallbacks(ctx).addOnConnectionFailedListener(ctx).build();

    // build RESTful (DriveSDKv2) service to fall back to
    GoogleAccountCredential crd = GoogleAccountCredential
        .usingOAuth2(ctx, Arrays.asList(com.google.api.services.drive.DriveScopes.DRIVE_FILE));
    crd.setSelectedAccountName(email);
    _drvSvc = new com.google.api.services.drive.Drive.Builder(
        AndroidHttp.newCompatibleTransport(), new GsonFactory(), crd).build();
}
2/ a method that queries the Drive RESTful API, returning GDAA's DriveId to be used by the app:

String qry = "title = 'MYFILE' and mimeType = 'text/plain' and trashed = false";

DriveId findObject(String qry) throws Exception {
    DriveId dId = null;
    try {
        final FileList gLst = _drvSvc.files().list().setQ(qry).setFields("items(id)").execute();
        if (gLst.getItems().size() == 1) {
            String sId = gLst.getItems().get(0).getId();
            dId = Drive.DriveApi.fetchDriveId(_gac, sId).await().getDriveId();
        } else if (gLst.getItems().size() > 1) {
            throw new Exception("more than one folder/file found");
        }
    } catch (Exception e) { /* fall through and return null */ }
    return dId;
}
The findObject() method above (again, I'm using the await() flavor for simplicity) returns the Drive objects correctly, reflecting the trashed status with no noticeable delay (implement it on a non-UI thread).
Again, I would strongly advise AGAINST leaving this in code longer than necessary, since it is a HACK with unpredictable effects on the rest of the system.
I'm looking for a free tool or DLLs that I can use to write my own code in .NET to process some web requests.
Let's say I have a URL with some query string parameters, similar to http://www.example.com?param=1, and when I use it in a browser several redirects occur and eventually HTML is rendered that has a frameset, where a frame's inner HTML contains a table with data that I need. I want to store this data in an external file in CSV format. Obviously the data is different depending on the query string parameter param. Let's say I want to run the application and generate 1000 CSV files for param values from 1 to 1000.
I have good knowledge of .NET, JavaScript, and HTML, but the main problem is how to get the final HTML in the server code.
What I tried: I created a new Forms application, added a WebBrowser control, and used code like this:
private void FormMain_Shown(object sender, EventArgs e)
{
    var param = 1; //test
    var url = string.Format(Constants.URL_PATTERN, param);
    WebBrowserMain.Navigated += WebBrowserMain_Navigated;
    WebBrowserMain.Navigate(url);
}

void WebBrowserMain_Navigated(object sender, WebBrowserNavigatedEventArgs e)
{
    if (e.Url.OriginalString == Constants.FINAL_URL)
    {
        var document = WebBrowserMain.Document.Window.Frames[0].Document;
    }
}
But unfortunately I receive an UnauthorizedAccessException, probably because the frame and the document are in different domains. Does anybody have an idea of how to work around this, or maybe a brand new approach to implementing functionality like this?
Thanks to Noseratio's comments, I managed to do it with the WebBrowser control. Here are some major points that might help others who have similar questions:
1) The DocumentCompleted event should be used; for the Navigated event, the body of the document is null.
2) The following answer helped a lot: WebBrowserControl: UnauthorizedAccessException when accessing property of a Frame
3) I was not aware of IHTMLWindow2 and similar interfaces; for them to work correctly I added references to the following COM libs: Microsoft Internet Controls (SHDocVw) and Microsoft HTML Object Library (MSHTML).
4) I grabbed the HTML of the frame with the following code:
void WebBrowserMain_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    if (e.Url.OriginalString == Constants.FINAL_URL)
    {
        try
        {
            var doc = (IHTMLDocument2) WebBrowserMain.Document.DomDocument;
            var frame = (IHTMLWindow2) doc.frames.item(0);
            var document = CrossFrameIE.GetDocumentFromWindow(frame);
            var html = document.body.outerHTML;
            var dataParser = new DataParser(html);
            //my logic here
        }
        catch (UnauthorizedAccessException)
        {
            // cross-frame access can still fail here; handle/log as needed
        }
    }
}
5) For working with the HTML, I used the fine HTML Agility Pack, which has pretty good XPath search.
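To close the loop on the original goal (one CSV file per param value), the parsing step can be sketched like this with HTML Agility Pack (the XPath, column handling, and file naming are placeholders, not my actual DataParser code; it assumes using System.Collections.Generic, System.IO, and System.Linq):
// Sketch: extract the frame's table into a CSV file for one param value.
var htmlDoc = new HtmlAgilityPack.HtmlDocument();
htmlDoc.LoadHtml(html);
var lines = new List<string>();
var rows = htmlDoc.DocumentNode.SelectNodes("//table//tr"); // placeholder XPath
if (rows != null)
{
    foreach (var row in rows)
    {
        var cells = row.SelectNodes("td|th");
        if (cells != null)
            lines.Add(string.Join(",", cells.Select(c => c.InnerText.Trim())));
    }
}
// Note: real CSV output should quote cells that contain commas.
File.WriteAllLines(string.Format("data_{0}.csv", param), lines);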