I have opened a Revit document using the Revit 2014 API from within an Idling event handler. After that, I am trying to activate a 3D view, but I am getting an exception ("Setting active view is temporarily disabled"). Is there any way to get around this exception? Please refer to the code below and the journal output. Thanks.
Note: a modeless dialog activates the 3D view without any problem.
Code snippet to activate the 3D view:
Document doc = uiApp.ActiveUIDocument.Document;
FilteredElementCollector viewCollector = new FilteredElementCollector(doc);
ElementClassFilter viewFilter = new ElementClassFilter(typeof(Autodesk.Revit.DB.View3D));
viewCollector.WherePasses(viewFilter);
try
{
    // Activate the first valid, non-template 3D view in the document.
    foreach (Autodesk.Revit.DB.View3D vw in viewCollector)
    {
        if (vw.IsValidObject && !vw.IsTemplate)
        {
            uiApp.ActiveUIDocument.ActiveView = vw; // this is the line that throws
            break;
        }
    }
}
catch (Exception)
{
    throw; // rethrow as-is; "throw e;" would reset the stack trace
}
finally
{
    viewCollector.Dispose();
    viewFilter.Dispose();
}
Last few lines of the journal file:
' 1:< ::10:: Delta VM: Avail -27 -> 8384734 MB, Used +4 -> 437 MB; RAM: Avail -13 -> 3329 MB, Used +5 -> 528 MB
' C 07-Sep-2016 12:17:22.868; 1:< Exception in exportToObj() method :: Setting active view is temporarily disabled.
' at RevitCommandListener.RevitCommandListenerService.OpenAndActivate3DView(UIApplication uiApp)
' at RevitCommandListener.RevitCommandListenerService.exportToObj(UIApplication uiApp)
I would take the call to change the view out of the Idling event handler. Where else can you put it? Into some method that is called later on, and is not an Idling event handler. One possibility that comes to mind is to implement an external command X that sets the view and call PostCommand in the Idling event handler to launch X at a later point in time. Please let us know whether that works better. Thank you.
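To make that concrete, here is a minimal sketch of the PostCommand approach. It assumes the command below is registered in your .addin manifest; the class name, the lookup string and the view-selection logic are placeholders of my own, and LookupCommandId must be given the id your command is actually registered under:
using Autodesk.Revit.Attributes;
using Autodesk.Revit.DB;
using Autodesk.Revit.UI;

// Hypothetical external command "X": activates the first non-template 3D view.
[Transaction(TransactionMode.Manual)]
public class ActivateView3DCommand : IExternalCommand
{
    public Result Execute(ExternalCommandData commandData, ref string message, ElementSet elements)
    {
        UIDocument uidoc = commandData.Application.ActiveUIDocument;
        foreach (View3D vw in new FilteredElementCollector(uidoc.Document).OfClass(typeof(View3D)))
        {
            if (!vw.IsTemplate)
            {
                uidoc.ActiveView = vw; // allowed here, outside the Idling handler
                return Result.Succeeded;
            }
        }
        return Result.Failed;
    }
}

// In the Idling event handler, post the command instead of setting the view directly;
// it executes once Revit is ready, when setting the active view is permitted again.
RevitCommandId cmdId = RevitCommandId.LookupCommandId("YourCompany.ActivateView3DCommand");
if (cmdId != null)
{
    uiApp.PostCommand(cmdId);
}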
I have started migrating from Vue 2 to Vue 3, and now I have a problem: the browser console shows some kind of warning, and it is generated approximately 1000 times per second. This causes the tab to hang. How can it be stopped?
import { ref } from 'vue';

export default {
  setup() {
    // your reactive properties
    const rProp = ref('ref prop');

    return {
      // setup() returns the object used by the template for rendering;
      // if a property the template uses is absent here, you will see the warnings
      rProp
    };
  }
};
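For illustration, a hypothetical template that would trigger the warning: rProp renders fine because it is returned above, while a property like missingProp, which setup() does not return, produces the warning on every render, so a component that keeps re-rendering will spam it roughly a thousand times per second:
<template>
  <!-- rProp is returned from setup(); missingProp is not, so Vue warns on each render -->
  <div>{{ rProp }} {{ missingProp }}</div>
</template>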
I'm trying to build something using the script processor node here. Almost anything I do in terms of debugging crashes Chrome; specifically, the tab. Whenever I bring up dev tools, and 100% of the time I put a breakpoint in the onaudioprocess handler, the tab dies and I have to either find the Chrome helper process for that tab or force-quit Chrome altogether to get started again. It has basically crippled my development for the time being. Is this a known issue? Do I need to take certain precautions to prevent Chrome from crashing? Are the real-time aspects of the Web Audio API simply not debuggable?
Without seeing your code, it's a bit hard to diagnose the problem.
Does running this code snippet crash your browser tab?
let audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function onPlay() {
  let scriptProcessor = audioCtx.createScriptProcessor(4096, 2, 2);
  scriptProcessor.onaudioprocess = onAudioProcess;
  scriptProcessor.connect(audioCtx.destination);

  let oscillator = audioCtx.createOscillator();
  oscillator.type = "sawtooth";
  oscillator.frequency.value = 220;
  oscillator.connect(scriptProcessor);
  oscillator.start();
}

function onAudioProcess(event) {
  let { inputBuffer, outputBuffer } = event;
  for (let channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
    let inputData = inputBuffer.getChannelData(channel);
    let outputData = outputBuffer.getChannelData(channel);
    for (let sample = 0; sample < inputBuffer.length; sample++) {
      outputData[sample] = inputData[sample];
      // Add white noise to oscillator.
      outputData[sample] += ((Math.random() * 2) - 1) * 0.2;
      // Un-comment the following line to crash the browser tab.
      // console.log(sample);
    }
  }
}

<button type="button" onclick="onPlay()">Play</button>
If it crashes, there's something else in your local dev environment causing you problems, because it runs perfectly for me.
If not, then maybe you are doing a console.log() (or some other heavy operation) in your onaudioprocess event handler? Remember, this event handler processes thousands of audio samples every time it is called, so you need to be careful about what you do in it. For example, try un-commenting the console.log() line in the code snippet above: your browser tab will crash.
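If you do need some visibility inside the handler, one workaround (my own suggestion, not part of the snippet above) is to throttle the logging so it fires at most once per second instead of once per sample:
let lastLog = 0;

function onAudioProcess(event) {
  // Log at most once per second; per-sample logging floods the console and kills the tab.
  let now = performance.now();
  if (now - lastLog > 1000) {
    lastLog = now;
    console.log("processing a buffer of " + event.inputBuffer.length + " samples");
  }
  // ...copy and process the samples as in the snippet above...
}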
Good afternoon.
At the moment I am trying to write the code in the "Main Activity" to send some waypoints to my IRIS drone, but it only works when there are five points. Could you check my code and give me suggestions about what is happening and how I can send more waypoints to my drone? I really appreciate your help because I am new to developing on Android:
Code:
public void onBtnConnectTap3(View view) {
    if (this.drone.isConnected()) {
        this.drone.disconnect();
    } else {
        Spinner connectionSelector = (Spinner) findViewById(R.id.selectConnectionType);
        int selectedConnectionType = connectionSelector.getSelectedItemPosition();
        Bundle extraParams = new Bundle();
        if (selectedConnectionType == ConnectionType.TYPE_USB) {
            extraParams.putInt(ConnectionType.EXTRA_USB_BAUD_RATE, DEFAULT_USB_BAUD_RATE); // Set default baud rate to 57600
        } else {
            extraParams.putInt(ConnectionType.EXTRA_UDP_SERVER_PORT, DEFAULT_UDP_PORT); // Set default UDP server port to 14550
        }
        ConnectionParameter connectionParams = new ConnectionParameter(selectedConnectionType, extraParams, null);
        this.drone.connect(connectionParams);
    }

    // Build a mission of 19 waypoints, each followed by a yaw condition.
    currentMission = new Mission();
    currentMission.clear();
    for (int i = 1; i < 20; i++) {
        waypoint2 = new Waypoint();
        yaw = new YawCondition();
        waypoint2.setCoordinate(new LatLongAlt(i, i, i));
        yaw.setAngle(i);

        missionI3 = waypoint2;
        currentMission.addMissionItem(missionI3);
        missionI2 = yaw;
        currentMission.addMissionItem(missionI2);
    }
    this.drone.generateDronie();
    this.drone.setMission(currentMission, true);
    this.drone.arm(true);
}
Dependencies in build.gradle:
dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:22.1.1'
    compile 'com.o3dr.android:dronekit-android:2.3.11'
}
I would also like to know where I can keep learning about how to develop Android apps for 3DRobotics drones, taking into consideration that my main sources are http://android.dronekit.io/first_app.html and http://android.dronekit.io/javadoc/
Thanks in advance for your answer.
I'm not completely sure what you are trying to accomplish, but I see some possible errors in your code.
Use the latest version of dronekit-android. The current version is 2.7.0. You can keep up to date on the versions here: https://bintray.com/3drobotics/maven/dronekit-android/view
You are generating a mission with 38 items (19 waypoints and 19 yaws). You are also doing a very unsafe thing by setting the waypoint coordinates to 1,1,1 ... 19,19,19. Your vehicle will fly somewhere I assume you didn't intend.
I'm unsure why you call generateDronie(). As per the docs:
Generate action to create a dronie mission, and upload it to the connected drone.
A dronie is a specific type of mission that will fly a selfie path.
setMission() is correct. However, the last step in your code is to arm the vehicle. You will need to tell the drone to actually run the mission. You can do this with the startMission() method in the MissionApi class.
Be careful when setting and starting a mission within the same user interaction. There is always the chance that setMission() will fail to upload to the vehicle. If that is the case, startMission() will run the last mission that was successfully uploaded to the vehicle.
You can verify the upload succeeded by listening for the broadcast AttributeEvent.MISSION_SENT.
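A rough sketch of how those last two pieces can fit together; this is my own illustration, not code from the question, and the DroneListener and MissionApi signatures should be double-checked against the 2.7.0 javadoc:
// Register a listener so the mission is only started after the upload is confirmed.
this.drone.registerDroneListener(new DroneListener() {
    @Override
    public void onDroneEvent(String event, Bundle extras) {
        if (AttributeEvent.MISSION_SENT.equals(event)) {
            // The mission reached the vehicle; now it is safe to run it.
            MissionApi.getApi(drone).startMission(true, true, null);
        }
    }

    @Override
    public void onDroneConnectionFailed(ConnectionResult result) { }

    @Override
    public void onDroneServiceInterrupted(String errorMsg) { }
});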
You can always contribute to the documentation by adding javadocs to APIs that you feel are missing or need clarification.
When Adobe pushed Flash 12.0.0.70 to Chrome, it broke my video recorder.
According to the patch notes here, one thing was changed that might have broken my Flash-based recorder:
[3689061] [Video] Resolves an issue injected in Flash Player 11.9.900.170 that caused the video buffer to no longer be filled if the buffer was emptied while playing an RTMP stream
My recorder breaks when it's time to stop the stream and save the video to the Adobe Media Server.
I tried debugging it with the 12.0.0.70 flash debugger, but it doesn't crash when I'm using the debugger. Only when using the non-debugger Chrome version does it crash.
I can't debug it and get any useful information out of my swf, apart from making a bunch of external calls to console.log to see where it fails.
If someone has run into a similar issue with Flash-based, media-server-connected webcam recorders and can guess at what might fix my problem, I'd be grateful.
I'm building this swf with Flex 4.6.0.
Here's the function that stops the video recorder.
public function doStop():void {
    if (status == "paused") {
        doResume();
    }
    rectColor.color = 0x000000;
    rectColor.alpha = 1;

    // One-shot 10 ms timer: defer the actual stop slightly before closing the stream.
    var timer:Timer = new Timer(1 * 10);
    timer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
        timer.stop();
        timer.reset();
        myns.close(); // close the NetStream to the media server
        myTimer.stop();
        if (!thumbBeginning) {
            if (status == "recording") {
                takeScreenShot();
            }
        } else {
            if (status == "recording") {
                recordingTime = formatTime(realTime);
                recordingLength = myTimer.currentCount;
                if (!redoFlag) {
                    ExternalInterface.call("CTRecorder.stopOk");
                    myTime.text = formatTime(0);
                    VD1.attachCamera(myCam);
                    setState("ready");
                    status = "stopped, ready";
                    playbackTimer.reset();
                    msg(recordingTime);
                    recording = false;
                    pauseTime = 0;
                } else {
                    pauseTime = 0;
                    myTime.text = formatTime(0);
                    VD1.attachCamera(myCam);
                    playbackTimer.reset();
                    msg(recordingTime);
                    recording = false;
                }
            }
            if (shutterGroup.visible) {
                toggleShutter();
            }
            myTimer.stop();
            myTimer.reset();
            if (redoFlag) {
                doRecord();
                redoFlag = false;
                trace("redoFlag turned off");
            }
        }
        rectColor.alpha = .5;
    });
    timer.start();
}
This isn't really an answer, but it's too long for a comment.
"I can't debug it" - it only breaks in the release version? Is the release version the pepper plugin (i.e. Chrome's version of Flash), and the debug is the NPAPI plugin (i.e. Adobe's version)?
A likely candidate for where it's breaking is the ExternalInterface.call("CTRecorder.stopOk"); call. Are you testing this locally or remotely? If locally, then you might be running into this bug: https://code.google.com/p/chromium/issues/detail?id=137734 where the Flash <-> JS communication is broken because trusted locations are ignored in PPAPI Flash. In any case, try installing the release NPAPI version of Flash and see if it still crashes (you can verify which one is running by visiting chrome://plugins/).
To help debug the release version, you need a logging system: instead of making trace() calls, you call a custom log() function that, as well as trace()ing, also stores the message somewhere, such as in an Array. Then, in your SWF, when you hit a certain key, show a TextField on the screen and populate it with your log() messages. That way, you'll be able to see trace() statements in release mode.
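A minimal sketch of such a logger; the names (logHistory, logField) are my own, and it assumes the code lives in the document class so stage and addChild() are available:
import flash.text.TextField;
import flash.events.KeyboardEvent;
import flash.ui.Keyboard;

// Keep every log message in memory so it can be shown in release builds.
var logHistory:Array = [];
var logField:TextField = new TextField();
logField.width = 400;
logField.height = 300;
logField.visible = false;
addChild(logField);

function log(message:String):void {
    trace(message);           // still visible in the debugger
    logHistory.push(message); // and retrievable in release mode
}

// Press L to toggle the on-screen log.
stage.addEventListener(KeyboardEvent.KEY_DOWN, function(e:KeyboardEvent):void {
    if (e.keyCode == Keyboard.L) {
        logField.text = logHistory.join("\n");
        logField.visible = !logField.visible;
    }
});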
Also, don't forget to listen for any relevant error events and thrown exceptions; ExternalInterface.call() can throw an Error or a SecurityError, for example. You can also set the marshallExceptions property, which will pass ActionScript exceptions to the browser and JavaScript exceptions to the player: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/external/ExternalInterface.html#marshallExceptions
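For example, a sketch of guarding the suspect call (using the hypothetical log() function from the sketch above):
ExternalInterface.marshallExceptions = true; // JavaScript exceptions now arrive as AS3 errors

try {
    ExternalInterface.call("CTRecorder.stopOk");
} catch (e:Error) {
    log("ExternalInterface.call failed: " + e.message); // e.g. a SecurityError under PPAPI
}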
Finally, add a listener for the UncaughtErrorEvent.UNCAUGHT_ERROR event on your main class, which will catch any uncaught thrown errors (funnily enough), so that at the very least your app doesn't collapse:
mainClass.loaderInfo.uncaughtErrorEvents.addEventListener( UncaughtErrorEvent.UNCAUGHT_ERROR, this._onUncaughtErrorEvent );

private function _onUncaughtErrorEvent( e:UncaughtErrorEvent ):void
{
    var message:String = null;
    var stackTrace:String = null;

    // get the message
    if ( e.error is Error )
    {
        message = ( e.error as Error ).message;
        try { stackTrace = ( e.error as Error ).getStackTrace(); }
        catch ( error:Error ) { stackTrace = "No stack trace"; }
    }
    else if ( e.error is ErrorEvent )
        message = ( e.error as ErrorEvent ).text;
    else
        message = e.error.toString();

    // show an alert
    trace( "An uncaught exception has occurred: " + e.errorID + ": " + e.type + ": " + message + ", stack:\n" + stackTrace );
    e.preventDefault();
}
I have captured 3 videos on my mobile, which are by default stored in the phone gallery (Gallery/videos/). I have to play these 3 videos in one of my Flex mobile applications. How can I get the videos into the Flex project? If I need to browse the mobile directory, kindly help me with some code to do so.
I too am looking for an answer to this question. Right now, based on other Stack Overflow discussions, exhaustive perusal of tutorials and Adobe documentation, and comments on both (often the more useful resource), I'm coming to the conclusion that it's not possible.
you can use CameraRoll.browseForImage() and open the iOS gallery of photos to see all entities of MediaType.IMAGE, but it will not show you MediaType.VIDEO
you can use CameraUI to launch the system camera by delegation and that returns a MediaPromise, but as far as I can tell, it does not save the video you capture anywhere, and I cannot find a way to access the captured video using the MediaPromise (at least using the Loader class)
Here's my code as a hint in that direction. The second code block uses CameraRoll to browseForImage(), but there is no browseForVideo() in the API.
if (CameraUI.isSupported)
{
    camera = new CameraUI();
    camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
    camera.addEventListener(Event.CANCEL, cameraCanceled);
    camera.addEventListener(ErrorEvent.ERROR, cameraError);
    camera.launch(MediaType.VIDEO);
}
else
{
    statusText.text = "Camera not supported on this device.";
    startTimer();
}

if (CameraRoll.supportsBrowseForImage)
{
    roll = new CameraRoll();
    roll.addEventListener(MediaEvent.SELECT, cameraRollEventComplete);
    roll.addEventListener(Event.CANCEL, cameraCanceled);
    roll.addEventListener(ErrorEvent.ERROR, cameraError);
    roll.browseForImage();
}
else
{
    statusText.text = "Camera roll not supported on this device.";
    startTimer();
}
I've since found that videos captured using the delegated system camera are stored in a temporary storage location that iOS -DOES!- allow access to. (I was pleasantly shocked.)
The captured video is not added to the device's Camera Roll the way other videos captured with the iOS system Camera app are, so it's not enough to capture video and expect to be able to access it later (if, for instance, CameraRoll.browseForVideo() is ever added to the API).
Therefore, you have to 'get while the getting is good' and move the file from the temporary storage location to some non-volatile location, such as ApplicationStorageDirectory or the user's Documents directory (the only options in iOS, I think).
The MediaPromise... I think... is completely useless for accessing the video via any direct progressive loader/streamer method, but still provides the location/url/path/filename of the temporary file so you can perform File operations on it.
It's ironic that there are tutorials for getting around the lack of a file location/url/path/filename in the MediaPromise when using CameraRoll.browseForImage() (the trick is to use a loader class to load the image content, which you can then write out to a file), yet when taking video the opposite holds: the video content is not accessible, and instead a file location/url/path/filename is provided. It's also ironic that I could find nearly no resources to help with this. grumble
I'm going to include some code chunks without really editing them to strip out the extraneous bits, because it's way past when I need to be in bed, but I wanted you to have this. I may come back and clean it up later.
This section is in a Spark SkinnablePopUpContainer, and I use the same click event for several buttons, so the 'case' below is from the switch-case in that event handler function.
In case you are not familiar, close(true, data) is the method that closes the SkinnablePopUpContainer and tells the parent/owner that the container was closed purposefully and that it should look for the data object being shared back (i.e., there are changes to be 'commit'ed).
case "cameraVideo":
{
if(CameraUI.isSupported)
{
camera = new CameraUI();
camera.addEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
camera.addEventListener(Event.CANCEL, cameraCanceled);
camera.addEventListener(ErrorEvent.ERROR, cameraError);
camera.launch(MediaType.VIDEO);
}
else
{
statusText.text = "Camera not supported on this device.";
startTimer();
}
break;
}
protected function cameraCanceled(event:Event):void
{
statusText.text = "Camera access canceled by user.";
startTimer();
}
protected function cameraError(event:ErrorEvent):void
{
statusText.text = "There was an error while trying to use the camera.";
startTimer();
}
protected function videoMediaEventComplete(event:MediaEvent):void
{
statusText.text="Preparing captured video...";
camera.removeEventListener(MediaEvent.COMPLETE, videoMediaEventComplete);
camera.removeEventListener(Event.CANCEL, cameraCanceled);
camera.removeEventListener(ErrorEvent.ERROR, cameraError);
var media:MediaPromise = event.data;
data.MediaType = MediaType.VIDEO;
data.MediaPromise = media;
data.source = "camera video";
close(true,data)
}
This section is the ActionScript in the close handler of the parent/owner of the SkinnablePopUpContainer (truncated once the useful code is included):
private function choosePictureLightboxClosed(event:PopUpEvent):void
{
    imageButtonsActive = false;
    if (event.commit)
    {
        this.data = event.data as Object;
        filters = new Array();
        selection = true;
        switch (data.MediaType)
        {
            case MediaType.VIDEO:
            {
                mediaType = "video";
                trace(data.MediaPromise.file.url + " - " + data.MediaPromise.relativePath + " - " + data.MediaPromise.mediaType);

                // Move the captured video out of its temporary location into
                // application storage before it disappears.
                var sourceFile:File = new File(data.MediaPromise.file.url);
                var destinationFile:File = File.applicationStorageDirectory.resolvePath("User" + parentApplication.userid);
                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath("Videos");
                if (destinationFile.exists && !destinationFile.isDirectory)
                {
                    destinationFile.deleteFile();
                }
                destinationFile.createDirectory();
                destinationFile = destinationFile.resolvePath(parentApplication.userid + "Video" + new Date().getTime() + ".mov");
                trace(destinationFile.nativePath);
                sourceFile.moveTo(destinationFile, true);
                break;
            }
I sure do hope this helps. This has been a very frustrating experience (and a costly one, since our project is government-grant funded and we utterly failed to meet our deadlines), and I very much hope that these hard-won solutions help others avoid the same.