When calling
https://developer.api.autodesk.com/modelderivative/v2/designdata/{urn}/metadata/{guid}/properties
for an IFC file, the response contains wrong information: instead of returning the property name, it returns its type. Example:
"properties": {
"Item": {
"LcOaNode:LcOaNodeGuid": "c13f6c25-d776-584a-8b84-c7132760a018",
"LcOaNode:LcOaNodeHidden": 0,
"LcOaNode:LcOaNodeIcon": "File",
"LcOaNode:LcOaNodeMaterial": "",
"LcOaNode:LcOaNodeRequired": 0,
"LcOaNode:LcOaSceneBaseClassUserName": "File",
"LcOaNode:LcOaSceneBaseUserName": "3d337589-4cea-4301-a236-4b39c1e15ac9.Ifc",
"LcOaNode:LcOaUnit": "Millimeters"
},
"Material": {
"LcOaExMaterial:LcOaMaterialAmbient0": 1,
"LcOaExMaterial:LcOaMaterialAmbient1": 1,
"LcOaExMaterial:LcOaMaterialAmbient2": 1,
"LcOaExMaterial:LcOaMaterialDiffuse0": 1,
"LcOaExMaterial:LcOaMaterialDiffuse1": 1,
"LcOaExMaterial:LcOaMaterialDiffuse2": 1,
"LcOaExMaterial:LcOaMaterialEmissive0": 0,
"LcOaExMaterial:LcOaMaterialEmissive1": 0,
"LcOaExMaterial:LcOaMaterialEmissive2": 0,
"LcOaExMaterial:LcOaMaterialShininess": 0.00001,
"LcOaExMaterial:LcOaMaterialSpecular0": 0,
"LcOaExMaterial:LcOaMaterialSpecular1": 0,
"LcOaExMaterial:LcOaMaterialSpecular2": 0,
"LcOaExMaterial:LcOaMaterialTransparency": 0
}, ....
Where "LcOaExMaterial:LcOaMaterialAmbient0" is returned for example, it should be the property's name.
IFC files are extracted via Navisworks, and this is expected behaviour. For a given property, you can use its displayName (if available) instead of the internal key.
Related
I'm using ZappySys in SSIS to import JSON data from a third party.
It's all working well, but sometimes the third party doesn't include the same elements each time. Samples below.
The first sample works, as it includes the expected Equipment and Catering cost elements that I supplied in the Sample JSON string section of the "ZS JSON Parser Transform" screen:
{
"RoomCost": {
"Net": 150,
"Tax": 0,
"Gross": 150
},
"CateringCost": {
"Net": 187.2,
"Tax": 0,
"Gross": 187.2
},
"EquipmentCost": {
"Net": 0,
"Tax": 0,
"Gross": 0
},
"Discount": {
"Net": 0,
"Tax": 0,
"Gross": 0
},
"Total": {
"Net": 337.2,
"Tax": 0,
"Gross": 337.2
}
}
The example below fails to import, with NULL errors in the logs, because the Equipment and Catering elements are not supplied:
{
"RoomCost": {
"Net": 150,
"Tax": 0,
"Gross": 150
},
"Discount": {
"Net": 0,
"Tax": 0,
"Gross": 0
},
"Total": {
"Net": 337.2,
"Tax": 0,
"Gross": 337.2
}
}
Error: The type of the value (DBNull) being assigned to variable "User::mCateringCostGross" differs from the current variable type (Decimal). Variables may not change type during execution. Variable types are strict, except for variables of type Object.
What is the best way to handle this situation?
The fix for this was to include a Derived Column transformation between the ZappySys JSON source and the destination, to convert NULLs to default values; a sketch of the expression is below.
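For example, a REPLACENULL expression in the Derived Column transformation substitutes a default of 0 when the element is missing (CateringCostGross is only a placeholder for whatever the ZappySys output column is actually named):
REPLACENULL(CateringCostGross, 0)
On SSIS versions that predate REPLACENULL, the equivalent conditional form is ISNULL(CateringCostGross) ? 0 : CateringCostGross.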
I'm trying to store JSON bytes in PostgreSQL, but there's a problem:
\u0000 cannot be converted to text.
As you can see below, the JSON contains Unicode escape sequences such as \u0000, which PostgreSQL seems to interpret as Unicode characters rather than as plain JSON text.
err := raws.SaveRawData(data, url)
// the bytes contain the escape sequence "\u0000"
if err.Error() == "ERROR: unsupported Unicode escape sequence (SQLSTATE 22P05)" {
	// try to remove \u0000, but this does not work
	data = bytes.Trim(data, "\u0000")
	e := raws.SaveRawData(data, url) // save the data again
	if e != nil {
		return e // the same error comes back
	}
	return nil
}
The original API data can be accessed here. There is a \u0000 in it:
{
"code": 0,
"message": "0",
"ttl": 1,
"data": {
"bvid": "BV1jb411C7m3",
"aid": 42443484,
"videos": 1,
"tid": 172,
"tname": "手机游戏",
"copyright": 1,
"pic": "http://i0.hdslb.com/bfs/archive/c76ee4798bf2ba0efc8449bcb3577d508321c6c5.jpg",
"title": "冰塔:我连你的大招都敢硬抗,所以告诉我谁才是生物女王?!单s冰塔怒砍档案女王巴德尔,谁,才是生物一姐?(手动滑稽)",
"pubdate": 1549100438,
"ctime": 1549100438,
"desc": "bgm:逮虾户\n今天先水一期冰塔的,明天再水\\u0000绿塔的,后天就可以下红莲啦,计划通嘿嘿嘿(º﹃º )",
"desc_v2": [
{
"raw_text": "bgm:逮虾户\n今天先水一期冰塔的,明天再水\\u0000绿塔的,后天就可以下红莲啦,计划通嘿嘿嘿(º﹃º )",
"type": 1,
"biz_id": 0
}
],
"state": 0,
"duration": 265,
"rights": {
"bp": 0,
"elec": 0,
"download": 1,
"movie": 0,
"pay": 0,
"hd5": 0,
"no_reprint": 1,
"autoplay": 1,
"ugc_pay": 0,
"is_cooperation": 0,
"ugc_pay_preview": 0,
"no_background": 0,
"clean_mode": 0,
"is_stein_gate": 0
},
"owner": {
"mid": 39699039,
"name": "明眸-雅望",
"face": "http://i0.hdslb.com/bfs/face/240f74f8706955119575ea6c6cb1d31892f93800.jpg"
},
"stat": {
"aid": 42443484,
"view": 1107,
"danmaku": 7,
"reply": 22,
"favorite": 5,
"coin": 4,
"share": 0,
"now_rank": 0,
"his_rank": 0,
"like": 10,
"dislike": 0,
"evaluation": "",
"argue_msg": ""
},
"dynamic": "#崩坏3#",
"cid": 74479750,
"dimension": {
"width": 1280,
"height": 720,
"rotate": 0
},
"no_cache": false,
"pages": [
{
"cid": 74479750,
"page": 1,
"from": "vupload",
"part": "冰塔:我连你的大招都敢硬抗,所以告诉我谁才是生物女王?!单s冰塔怒砍档案女王巴德尔,谁,才是生物一姐?(手动滑稽)",
"duration": 265,
"vid": "",
"weblink": "",
"dimension": {
"width": 1280,
"height": 720,
"rotate": 0
}
}
],
"subtitle": {
"allow_submit": false,
"list": []
},
"user_garb": {
"url_image_ani_cut": ""
}
}
}
The struct for save is:
type RawJSONData struct {
ID uint64 `gorm:"primarykey" json:"id"`
CreatedAt time.Time `json:"-"`
DeletedAt gorm.DeletedAt `json:"-" gorm:"index"`
Data datatypes.JSON `json:"data"`
URL string `gorm:"index" json:"url"`
}
datatypes.JSON is from gorm.io/datatypes. It seems to be essentially json.RawMessage, i.e. a []byte.
I use PostgreSQL's JSONB type to store this data.
Table:
create table raw_json_data
(
id bigserial not null constraint raw_json_data_pke primary key,
created_at timestamp with time zone,
deleted_at timestamp with time zone,
data jsonb,
url text
);
The Unicode escape sequence \u0000 is simply not supported in Postgres TEXT and JSONB columns:
The jsonb type also rejects \u0000 (because that cannot be represented in PostgreSQL's text type)
You can change the column type to JSON:
create table Foo (test JSON);
insert into Foo (test) values ('{"text": "明天再水\u0000绿塔的"}');
-- works
The json data type stores an exact copy of the input text
This has the advantage of keeping the data identical to what you received from the API, in case the escape sequence has some meaning that you need to preserve.
It'll also allow you to query using Postgres JSON operators (e.g. ->>), although converting a JSON field that contains \u0000 to text will still fail:
select test->>'text' from Foo
-- ERROR: unsupported Unicode escape sequence
Columns of type BYTEA also accept any byte sequence, without you having to manipulate the data. In GORM, use the type:bytea tag:
type RawJSONData struct {
// ... other fields
Data string `gorm:"type:bytea" json:"data"`
}
If none of the above is acceptable for you, then you must sanitize the input string, for example:
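Note that bytes.Trim only removes bytes from the ends of the slice, and the Go string "\u0000" is the actual NUL character rather than the six-character escape sequence, which is why the attempt in the question has no effect. Below is a minimal sanitizing sketch, assuming it is acceptable to re-encode the JSON (key order and formatting are not preserved):
import (
	"encoding/json"
	"strings"
)

// sanitizeJSON decodes the payload, removes NUL characters (the decoded
// form of \u0000) from every string value, and re-encodes the document.
func sanitizeJSON(data []byte) ([]byte, error) {
	var v interface{}
	if err := json.Unmarshal(data, &v); err != nil {
		return nil, err
	}
	return json.Marshal(stripNUL(v))
}

// stripNUL walks the decoded JSON value and strips NUL characters
// from every string it finds.
func stripNUL(v interface{}) interface{} {
	switch t := v.(type) {
	case string:
		return strings.ReplaceAll(t, "\x00", "")
	case map[string]interface{}:
		for k, val := range t {
			t[k] = stripNUL(val)
		}
		return t
	case []interface{}:
		for i, val := range t {
			t[i] = stripNUL(val)
		}
		return t
	default:
		return v
	}
}
You would call sanitizeJSON(data) only after the first SaveRawData attempt fails with SQLSTATE 22P05, mirroring the retry logic in the question.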
I have a JSON column (called "roi") which contains users' Instagram performance. This is the roi column:
{
"data": {
"campaignName": "Master Cosy",
"currency": "GBP",
"reportData": {
"AAAAAAAAAA": {
"id": "0f20d833-d0f-bdb7-19",
"name": "cornish_gregorys",
"thumbnail": "https://sstagram.com/v/t51.2885-19/s320x320/87244862_1017848048596",
"Name": "cornisorys",
"instagramCount": 2319,
"instagramEngagementFactor": 0,
"instagramAuthorised": true,
"hasPosts": true,
"budget": 0,
"derivedFee": 0,
"inventoryItems": [],
"trackedAssetsStats": {
"totalAssets": 9,
"facebook": {
"count": 0
},
"instagram": {
"total": 9,
"stories": 9,
"carousels": 0,
"videos": 0,
"images": 0,
"igtvs": 0
},
"BBBBBBBBBBBBB": {
"id": "d3d30db4-0b453dfc3ae2a09",
"name": "itssdha",
"thumbnail": "https://in9809609728_n.jpg?_nc_ht=instagram.fhel5-1.fna.fbcdn.net&_nc_ohc=Se3ySAoqnFwAX4f6&oeF1623",
"Name": "itsshdha",
"instagramCount": 26700,
"instagramEngagementFactor": 0,
"instagramAuthorised": true,
"hasPosts": true,
"budget": 0,
"derivedFee": 0,
"inventoryItems": [],
"trackedAssetsStats": {
"totalAssets": 5,
"facebook": {
"count": 0
},
"instagram": {
"total": 9,
"stories": 9,
"carousels": 0,
"videos": 0,
"images": 0,
"igtvs": 0}, etc.....
After "reportData" I have the specific names of the users (in this case AAAAAAAA and BBBBBBBBB) and within them the performance of their Instagram accounts. How can I access all the metrics within the object username without having to type the specific username (AAAAAAAA and BBBBBBBBB)
My query is this:
roi -> 'data' -> 'reportData' -> 'AAAAAAA' -> 'instagramCount' -> etc ....
But I need something to 'jump over' this part -> 'AAAAAAA' -> and go straight to the metrics, in this case 'instagramCount', etc.
From what I have read, I may need to use jsonb_each; does anyone know how to use it here?
Demo: db<>fiddle
You have several options.
Use jsonb_each() to expand all users' data: this creates one record per user, and you can then ask for the count afterwards:
SELECT
users.value -> 'instagramCount'
FROM
mytable,
jsonb_each(mydata -> 'data' -> 'reportData') as users
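jsonb_each() also returns the key, so the username is available alongside the value if you need it (this keeps the assumed table and column names, mytable and mydata):
SELECT
    users.key AS username,
    users.value -> 'instagramCount' AS instagram_count
FROM
    mytable,
    jsonb_each(mydata -> 'data' -> 'reportData') AS users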
Since Postgres 12 you can use a JSON path query to achieve the same:
SELECT
jsonb_path_query(mydata, '$.**.instagramCount')
FROM mytable
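If you prefer to avoid the recursive ** wildcard, a more targeted path that only descends one level into reportData should also work:
SELECT
    jsonb_path_query(mydata, '$.data.reportData.*.instagramCount')
FROM mytable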
I have only one node, so I set the replicas to 0 and shards to 1 using the script below:
PUT /my_temp_index
{
"settings": {
"number_of_shards" : 1,
"number_of_replicas" : 0
}
}
The cluster health output is:
{
"cluster_name": "KMT",
"status": "yellow",
"timed_out": false,
"number_of_nodes": 1,
"number_of_data_nodes": 1,
"active_primary_shards": 452,
"active_shards": 452,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 451,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 50.055370985603545
}
Do I have to restart Elasticsearch after the changes?
You have only changed the number of replicas for a single index, and the property names are wrong (they are missing the index. prefix).
You need to run the same request against all indices instead:
PUT /*/_settings
{
"index": {
"number_of_replicas" : 0
}
}
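After that, a single-node cluster with no replica shards should report green health; you can verify with:
GET /_cluster/health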
How do I approach writing a query to return all the records matching both match.id and player.name for the following collection?
{
"match": {
"id": 1,
"event": {
"timestamp": "2015-06-03 15:02:22",
"event": "round_stats",
"round": 1,
"player": {
"name": "Jim",
"userId": 45,
"uniqueId": "BOT",
"team": 2
},
"shots": 0,
"hits": 0,
"kills": 0,
"headshots": 0,
"tks": 0,
"damage": 0,
"assists": 0,
"assists_tk": 0,
"deaths": 0,
"head": 0,
"chest": 0,
"stomach": 0,
"leftArm": 0,
"rightArm": 0,
"leftLeg": 0,
"rightLeg": 0,
"generic": 0
}
}
}
I've attempted it with both of the following query statements, but had no luck; they both return no results:
db.warmod_events.find( { $and: [ { "match.id": 1}, { "player.name": 'Jim' } ] } )
db.warmod_events.find( { $and: [ { "match.id": 1}, { "event": { "player.name": "Jim" } } ] } )
I'm pretty new to Mongo, so any guidance and explanation would help a bunch. Truthfully, I chose Mongo for this project because the data I'm working with is already presented in this form (JSON), so it seemed like a good opportunity to use and learn it.
I am referring to the documentation on the Mongo site currently.
Thanks all
Try the following query:
db.warmod_events.find({ "match.id": 1, "match.event.player.name": 'Jim' })
which matches documents where both the match id and the player name inside the embedded event document have the given values. Your original queries return nothing because player is nested under match.event, so it has to be addressed with the full dot-notation path from the top of the document.
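If you only need the per-round stats back rather than the whole document, you can pass a projection as the second argument, for example:
db.warmod_events.find(
    { "match.id": 1, "match.event.player.name": "Jim" },
    { "match.event": 1, "_id": 0 }
)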