We are using the "Autodesk.BIM360.Extension.PushPin" extension inside the forge viewer to enable push pins.
When a pushpin has been added to the model, we serialize the pushpin data and store it in our database. An example of such a pushpin is shown below:
{
"id": "12",
"label": "12",
"status": "quality_issues-not_approved",
"position": {
"x": 15.324803588519861,
"y": -10.150864635427533,
"z": -5.532972775562976
},
"type": "issues",
"objectId": 24518,
"externalId": "d9a1e318-14d0-4d08-b7ab-6d1c331454c2-002793d1",
"viewerState": {
"seedURN": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6MDQyY2QwMmUtNzU0Yi00ZDY2LTgyYTMtNjBmYjFlOWVjMjcxL2U5ODAxZTA4LTUwZjQtNDc0ZS05ZWU4LTAxYWQ0ZGM0ODFiYl9WMV9Lb25nZXN0aWVuKzMwKy0rVGlsYnlnbmluZystK0clMjVDMyUyNUE2bGRlbmRlK2QuKzA1LjA2LnJ2dA",
"objectSet": [{
"id": [],
"isolated": [],
"hidden": [],
"explodeScale": 0,
"idType": "lmv"
}],
"viewport": {
"name": "",
"eye": ["-15.17842530349136", "-0.9048862425583284", "0.6506974303790392"],
"target": ["-22.06049144652811", "0.915848677106827", "-0.4205110420886964"],
"up": [-0.14385076361076257, 0.038057482024001874, 0.9888673247056924],
"worldUpVector": [0, 0, 1],
"pivotPoint": ["-22.510046835506888", "1.6223793651751013", "3.668585646439837"],
"distanceToOrbit": 7.198985875545766,
"aspectRatio": 1.491792224702381,
"projection": "orthographic",
"isOrthographic": true,
"orthographicHeight": 7.198985875545767
},
"renderOptions": {
"environment": "Boardwalk",
"ambientOcclusion": {
"enabled": true,
"radius": 13.123359580052492,
"intensity": 1
},
"toneMap": {
"method": 1,
"exposure": -7,
"lightMultiplier": -1e-20
},
"appearance": {
"ghostHidden": true,
"ambientShadow": true,
"antiAliasing": true,
"progressiveDisplay": true,
"swapBlackAndWhite": false,
"displayLines": true,
"displayPoints": true
}
},
"cutplanes": [],
"globalOffset": {
"x": -20.808594999999997,
"y": 6.686511499999999,
"z": 8.456207
}
},
"objectData": {
"guid": "6de5f80c-73da-30ae-b2d1-8a78f177c2a4",
"urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6MDQyY2QwMmUtNzU0Yi00ZDY2LTgyYTMtNjBmYjFlOWVjMjcxL2U5ODAxZTA4LTUwZjQtNDc0ZS05ZWU4LTAxYWQ0ZGM0ODFiYl9WMV9Lb25nZXN0aWVuKzMwKy0rVGlsYnlnbmluZystK0clMjVDMyUyNUE2bGRlbmRlK2QuKzA1LjA2LnJ2dA",
"viewableId": "aaff5911-e8b1-4ae2-b41c-4284d0703eb4-00150218",
"viewName": "{3D}"
}
}
We then load the pushpin into the model again at a later point (when the user reopens the model), like this:
pushPinExtension.loadItems([pushPinItem]);
The result is that the pushpin is added to the model at the correct place, but the viewer state is incorrect. It seems that the viewer state for the pushpin is set to the viewer state of the model at the time we load the pushpin, not to the viewer state stored inside the pushpin.
Is this expected behaviour? And if so, how do I use the viewer state from the pushpin instead?
Why not explicitly restore the viewer state stored in the pushpin separately, after loading the pushpin:
pushPinExtension.loadItems([pushPinItem]);
viewer.restoreState(pushPinItem.viewerState)
EDIT:
Try restoring the viewer state when an item is clicked. Subscribe to the pushpin selection event and restore the stored state from the handler:
// 'pushpin.selected' and the payload shape are assumptions; verify them
// against the PushPin extension source for your viewer version.
pushPinExtension.pushPinManager.addEventListener('pushpin.selected', (event) => {
    viewer.restoreState(event.value.itemData.viewerState);
});
I have a GeoJSON file that I want to get 4 separate sets of markers from. I can get all of the markers to show up, but I need to be able to display individual sets based on the target_group_name property.
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {
"site_code": "S0001726",
"short_title": "Boddington Bauxite",
"stage": "Operating",
"lon": "116.455902",
"lat": "-32.931561",
"target_group_name": "BAUXITE - ALUMINA"
},
"geometry": {
"type": "Point",
"coordinates": [
116.455902,
-32.931561
]
}
},
The code I have currently is:
map.on('load', () => {
map.addSource('mines', {
type: 'geojson',
data: 'https://raw.githubusercontent.com/ParkaviMathi/Project_3/main/static/clean_mines.geojson',
});
map.addLayer({
'id': 'mines-layer',
'type': 'circle',
'source': 'mines',
'paint': {
'circle-radius': 4,
'circle-stroke-width': 2,
'circle-color': 'red',
'circle-stroke-color': 'white'
}
    });
});
Any direction is greatly appreciated.
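One way to do this in Mapbox GL JS is to add one layer per group and use a filter expression on the target_group_name property. A sketch (the layer id and colour are placeholders):
// Show only the features whose target_group_name matches this group.
map.addLayer({
    'id': 'bauxite-layer',
    'type': 'circle',
    'source': 'mines',
    'filter': ['==', ['get', 'target_group_name'], 'BAUXITE - ALUMINA'],
    'paint': {
        'circle-radius': 4,
        'circle-stroke-width': 2,
        'circle-color': 'red',
        'circle-stroke-color': 'white'
    }
});
// Repeat with a different id, filter value, and circle-color for each of the
// four groups, or toggle a layer with map.setLayoutProperty(id, 'visibility', ...).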
EDIT: TL;DR - there is currently no way to know material names at the fragment level.
I want to read the materials from fragments of a node and change their materials according to a map that uses the Revit material names as keys.
I have the following "Materials and Finishes" properties from a node in the model (retrieved via Viewer3D):
And I have the following THREE materials from the fragments of that node:
Is there a way to set the names of the THREE materials to match the model data (or use them at all)?
Ideally I would be able to match these THREE materials with the following materials extracted from this node:
Unfortunately, the SVF file format (generated by the Model Derivative service and loaded by the Forge Viewer) does not preserve material names. Each fragment is simply associated with a specific material based on its index in the list.
The "Materials and Finishes" data is basically just a property specific to the original file (in this case a Revit model), and it may not be available in other file formats.
EDIT: I tried looking into the Materials.json.gz file, and unfortunately the names are not included there, either:
{
"name": "LMVTK Simple Materials",
"version": "1.0",
"scene": {
"SceneUnit": 8215,
"YIsUp": 0
},
"materials": {
"0": {
"version": 2,
"userassets": ["0"],
"materials": {
"0": {
"tag": "",
"proteinType": "",
"definition": "SimplePhong",
"properties": {
"integers": {
"mode": 4
},
"booleans": {
"color_by_object": false,
"generic_is_metal": false,
"generic_backface_cull": true
},
"scalars": {
"generic_transparency": {
"units": "",
"values": [0]
}
},
"colors": {
"generic_diffuse": {
"values": [{
"r": 0,
"g": 1,
"b": 0,
"a": 1
}]
}
}
},
"transparent": false,
"textures": {
}
}
}
},
"1": {
"version": 2,
"userassets": ["0"],
"materials": {
"0": {
"tag": "",
"proteinType": "",
"definition": "SimplePhong",
"properties": {
"integers": {
"mode": 4
},
"booleans": {
"color_by_object": false,
"generic_is_metal": false,
"generic_backface_cull": true
},
"scalars": {
"generic_transparency": {
"units": "",
"values": [0]
}
},
"colors": {
"generic_diffuse": {
"values": [{
"r": 0.400000,
"g": 0.400000,
"b": 0.400000,
"a": 1
}]
}
}
},
"transparent": false,
"textures": {
}
}
}
}
...
}
I have an MP4 video with SRT captions and I need to transcode them with MediaConvert. In MediaConvert I set automatic ABR and I specified the SRT origin path.
At the moment, I have tested the following:
I set the SRT file in one output and video/audio in another
I set SRT, video, and audio in the same output
For the first test, the job finishes successfully, but there isn't any .srt file in the S3 bucket. The second test fails with the message "Caption destination type [SRT] requires a raw muxer."
This is my JSON for the first test:
{
"Queue": "arn:aws:mediaconvert:us-east-1:{{ACCOUNT-NUMBER}}:queues/Default",
"UserMetadata": {},
"Role": "arn:aws:iam::{{ACCOUNT-NUMBER}}:role/{{MY-ROLE-NAME}}",
"Settings": {
"TimecodeConfig": {
"Source": "ZEROBASED"
},
"OutputGroups": [
{
"Name": "DASH ISO",
"Outputs": [
{
"ContainerSettings": {
"Container": "MPD"
},
"VideoDescription": {
"ScalingBehavior": "DEFAULT",
"TimecodeInsertion": "DISABLED",
"AntiAlias": "ENABLED",
"Sharpness": 50,
"CodecSettings": {
"Codec": "H_264",
"H264Settings": {
"InterlaceMode": "PROGRESSIVE",
"ScanTypeConversionMode": "INTERLACED",
"NumberReferenceFrames": 3,
"Syntax": "DEFAULT",
"Softness": 0,
"GopClosedCadence": 1,
"GopSize": 90,
"Slices": 1,
"GopBReference": "DISABLED",
"SlowPal": "DISABLED",
"EntropyEncoding": "CABAC",
"FramerateControl": "INITIALIZE_FROM_SOURCE",
"RateControlMode": "QVBR",
"CodecProfile": "MAIN",
"Telecine": "NONE",
"MinIInterval": 0,
"AdaptiveQuantization": "AUTO",
"CodecLevel": "AUTO",
"FieldEncoding": "PAFF",
"SceneChangeDetect": "ENABLED",
"QualityTuningLevel": "MULTI_PASS_HQ",
"FramerateConversionAlgorithm": "DUPLICATE_DROP",
"UnregisteredSeiTimecode": "DISABLED",
"GopSizeUnits": "FRAMES",
"ParControl": "INITIALIZE_FROM_SOURCE",
"NumberBFramesBetweenReferenceFrames": 2,
"RepeatPps": "DISABLED",
"DynamicSubGop": "STATIC"
}
},
"AfdSignaling": "NONE",
"DropFrameTimecode": "ENABLED",
"RespondToAfd": "NONE",
"ColorMetadata": "INSERT"
},
"AudioDescriptions": [
{
"AudioTypeControl": "FOLLOW_INPUT",
"AudioSourceName": "Audio Selector 1",
"CodecSettings": {
"Codec": "AAC",
"AacSettings": {
"AudioDescriptionBroadcasterMix": "NORMAL",
"Bitrate": 96000,
"RateControlMode": "CBR",
"CodecProfile": "LC",
"CodingMode": "CODING_MODE_2_0",
"RawFormat": "NONE",
"SampleRate": 48000,
"Specification": "MPEG4"
}
},
"StreamName": "latino",
"LanguageCodeControl": "FOLLOW_INPUT",
"LanguageCode": "SPA"
}
]
},
{
"ContainerSettings": {
"Container": "MPD"
},
"CaptionDescriptions": [
{
"CaptionSelectorName": "Captions Selector 1",
"DestinationSettings": {
"DestinationType": "SRT"
},
"LanguageCode": "SPA",
"LanguageDescription": "latino"
}
]
}
],
"OutputGroupSettings": {
"Type": "DASH_ISO_GROUP_SETTINGS",
"DashIsoGroupSettings": {
"SegmentLength": 30,
"MinFinalSegmentLength": 0,
"Destination": "s3://{{BUCKET-NAME}}/streaming15/dash-iso/",
"FragmentLength": 2,
"SegmentControl": "SINGLE_FILE",
"MpdProfile": "ON_DEMAND_PROFILE",
"HbbtvCompliance": "NONE"
}
},
"AutomatedEncodingSettings": {
"AbrSettings": {
"MaxAbrBitrate": 8000000,
"MinAbrBitrate": 600000
}
}
}
],
"AdAvailOffset": 0,
"Inputs": [
{
"AudioSelectors": {
"Audio Selector 1": {
"Offset": 0,
"DefaultSelection": "DEFAULT",
"ProgramSelection": 1
}
},
"VideoSelector": {
"ColorSpace": "FOLLOW",
"Rotate": "DEGREE_0",
"AlphaBehavior": "DISCARD"
},
"FilterEnable": "AUTO",
"PsiControl": "USE_PSI",
"FilterStrength": 0,
"DeblockFilter": "DISABLED",
"DenoiseFilter": "DISABLED",
"InputScanType": "AUTO",
"TimecodeSource": "ZEROBASED",
"CaptionSelectors": {
"Captions Selector 1": {
"SourceSettings": {
"SourceType": "SRT",
"FileSourceSettings": {
"SourceFile": "s3://{{BUCKET-NAME}}/PROMO_CAP_01.srt"
}
}
}
},
"FileInput": "s3://{{BUCKET-NAME}}/PROMO_CAP_01.mp4"
}
]
},
"AccelerationSettings": {
"Mode": "DISABLED"
},
"StatusUpdateInterval": "SECONDS_60",
"Priority": 0
}
What am I missing?
According to the AWS Elemental MediaConvert user guide, SRT is not a supported output for a DASH-ISO output group when the input caption type is SRT.
Here's a link to that guide (reference page 176):
https://docs.aws.amazon.com/mediaconvert/latest/ug/mediaconvert-guide.pdf
The supported caption outputs for SRT input in DASH-ISO are:
Burn in
IMSC (as sidecar .fmp4)
IMSC (as sidecar .xml)
TTML (as sidecar .fmp4)
TTML (as sidecar .ttml)
Additionally, there is a gap in the documentation. SRT->DASH-ISO+WebVTT is supported, even though it is not listed. The documentation will be corrected, but I wanted to share that with you in case it helps.
If you must send SRT to the output destination, then you could create a separate output group where the caption is in a track with no container (see pages 192-196 in the document).
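As a sketch of that last option (assuming the input captions selector from the job above; the destination path is a placeholder), the extra output group could look like this:
{
    "Name": "File Group - SRT sidecar",
    "OutputGroupSettings": {
        "Type": "FILE_GROUP_SETTINGS",
        "FileGroupSettings": {
            "Destination": "s3://{{BUCKET-NAME}}/streaming15/captions/"
        }
    },
    "Outputs": [
        {
            "ContainerSettings": {
                "Container": "RAW"
            },
            "CaptionDescriptions": [
                {
                    "CaptionSelectorName": "Captions Selector 1",
                    "DestinationSettings": {
                        "DestinationType": "SRT"
                    },
                    "LanguageCode": "SPA",
                    "LanguageDescription": "latino"
                }
            ]
        }
    ]
}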
I am having some issues while creating a job for AWS Elemental MediaConvert.
I followed this sequence:
1.) Create a new job
2.) Add input and configurations
3.) Add File output group and configure destination settings
4.) Under Output change Container to No Container
5.) Under Output remove Audio
6.) Under Output -> Video, change Codec to "Frame Capture to JPEG"
7.) Configure the frame rate (the rate at which captures will be produced)
8.) Configure max capture settings
I got the following error:
Job_contains_the_following_error:
/outputGroups: Should not match the schema
Here is my job JSON:
{
"Settings": {
"AdAvailOffset": 0,
"Inputs": [
{
"FilterEnable": "AUTO",
"PsiControl": "USE_PSI",
"FilterStrength": 0,
"DeblockFilter": "DISABLED",
"DenoiseFilter": "DISABLED",
"TimecodeSource": "EMBEDDED",
"VideoSelector": {
"ColorSpace": "FOLLOW",
"Rotate": "DEGREE_0"
},
"AudioSelectors": {
"Audio Selector 1": {
"Offset": 0,
"DefaultSelection": "DEFAULT",
"ProgramSelection": 1
}
},
"FileInput": "s3://field-live-user-data/udariyan.mp4"
}
],
"OutputGroups": [
{
"Name": "File Group",
"OutputGroupSettings": {
"Type": "FILE_GROUP_SETTINGS",
"FileGroupSettings": {
"Destination": "s3://field-live-user-data/"
}
},
"Outputs": [
{
"VideoDescription": {
"ScalingBehavior": "DEFAULT",
"TimecodeInsertion": "DISABLED",
"AntiAlias": "ENABLED",
"Sharpness": 50,
"CodecSettings": {
"Codec": "FRAME_CAPTURE",
"FrameCaptureSettings": {
"FramerateNumerator": 30,
"FramerateDenominator": 100,
"MaxCaptures": 2,
"Quality": 80
}
},
"DropFrameTimecode": "ENABLED",
"ColorMetadata": "INSERT",
"Width": 1280,
"Height": 720
},
"ContainerSettings": {
"Container": "RAW"
},
"Extension": "jpg"
}
],
"CustomName": "customGroup"
}
]
},
"Queue": "arn:aws:mediaconvert:us-east-1:469030323850:queues/Default",
"Role": "arn:aws:iam::469030323850:role/myMediaConverter"
}
Currently, you can't have a job template with frame capture only; see the AWS Developer Forums thread on this topic.
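If that constraint is what the schema error is pointing at, one workaround to try (an assumption based on that thread, not something verified here) is to add a regular video output alongside the frame capture output so the job is not capture-only, e.g.:
{
    "ContainerSettings": {
        "Container": "MP4"
    },
    "VideoDescription": {
        "CodecSettings": {
            "Codec": "H_264",
            "H264Settings": {
                "RateControlMode": "QVBR",
                "MaxBitrate": 5000000,
                "QvbrSettings": {
                    "QvbrQualityLevel": 7
                }
            }
        }
    }
}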
I'm unable to display a model with a texture map WITHOUT the viewer applying lighting effects to it.
I am using a locally hosted version of the viewer to investigate the problem, but I would welcome an Autodesk Material Library setting solution if one exists.
This is an example of how I want to see the material, i.e. no specular, no reflections (ignore the fact this example is in three.js): https://stemkoski.github.io/Three.js/Texture-Repeat.html
This is an example of my problem: https://myhub.autodesk360.com/ue29c31db/g/shares/SHabee1QT1a327cf2b7a7879b97973545818?viewState=NoIgbgDAdAjCA0IBGMAsBmATAMwKYBMBaCAQwHYBjQ1fATlUNt13UO1pIwDYZMAOTCVogAukA
I have attempted many different "Autodesk Material Library" settings, including ramping up "Self Illumination"; however, the texture either fails to load or the glossy shine persists.
Could the Materials.json be tweaked to fix this problem?
This is my Materials.json
{
"name": "LMVTK Simple Materials",
"version": "1.0",
"scene": {
"SceneUnit": 8214,
"YIsUp": 2
},
"materials": {
"0": {
"version": 2,
"userassets": ["0"],
"materials": {
"0": {
"tag": "0",
"proteinType": "",
"definition": "SimplePhong",
"properties": {
"integers": {
"mode": 4
},
"booleans": {
"color_by_object": false,
"generic_is_metal": false,
"generic_backface_cull": false
},
"scalars": {
"generic_transparency": {
"units": "",
"values": [0]
}
},
"colors": {
"generic_diffuse": {
"values": [{
"r": 1,
"g": 1,
"b": 1,
"a": 1
}]
}
}
},
"transparent": false,
"textures": {
"generic_diffuse": {
"connections": ["1_generic_diffuse"]
}
}
},
"1_generic_diffuse": {
"tag": "0",
"definition": "UnifiedBitmap",
"properties": {
"scalars": {
"unifiedbitmap_RGBAmount": {
"units": "",
"values": [1]
}
},
"uris": {
"unifiedbitmap_Bitmap": {
"values": ["image0.jpg"]
}
},
"booleans": {
"texture_URepeat": true,
"texture_VRepeat": true,
"unifiedbitmap_Invert": false
},
"integers": {
"texture_MapChannel": 1
}
}
}
}
}
}
}
I recommend trying this approach: bypass the LMV materials (which are all affected by lighting) and use a custom THREE material that is not affected by lighting. You may also need to create a custom shader.
Start with this...
https://forge.autodesk.com/cloud_and_mobile/2016/02/custom-transparent-meshes-with-view-data-api.html
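A minimal sketch of that idea (assuming the viewer's bundled three.js r71 API; fragId and the texture path are placeholders):
// Build an unlit material so lighting no longer affects the texture.
var texture = THREE.ImageUtils.loadTexture('image0.jpg');
var unlitMaterial = new THREE.MeshBasicMaterial({ map: texture });
// Register it with the viewer's material manager, then assign it to a fragment.
viewer.impl.matman().addMaterial('my-unlit-material', unlitMaterial, true);
viewer.model.getFragmentList().setMaterial(fragId, unlitMaterial);
viewer.impl.invalidate(true); // force a re-render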
Let me know if that fixes the problem, and if not I can deep dive into it further.
Best, Michael