Parse error. Expecting '}', ',', got 'EOF' - json

I'm looking for an error in a data file, because it's making a game get stuck and not start. I'm absolutely ignorant about programming, so don't kill me if this is senseless.
The error (JSONLint):
Error: Parse error on line 6782:
...cSpeciesCached": 0.0
-----------------------^
Expecting '}', ',', got 'EOF'
The last part of the file:
"facilities": [{
"cityID": "Qk3s7KAzLwUz7v0KQjUdCZyW0X2x9rZc",
"constructionType": 0,
"constructionTimer": {
"repeatMax": 7200,
"timeInterval": 4680,
"hasFired": true,
"repeats": false,
"shouldCatchUp": true,
"shouldStop": true,
"iterationThreshold": 2340,
"iterations": 19849
},
"upgradeTimer": {
"repeatMax": 7200,
"timeInterval": 3600,
"hasFired": true,
"repeats": false,
"shouldCatchUp": true,
"shouldStop": true,
"iterationThreshold": 1800,
"iterations": 3606
},
"facilityID": "HA17AvUZYVNrGIW64FrOOzx5eIXyX2vG",
"type": 33,
"level": 2,
"isDisabled": false,
"isOnMoon": true,
"heatCached": 0.0,
"airPressureCached": 0.0,
"oxygenContentCached": 0.0,
"seaLevelCached": 0.0,
"biomassCached": 0.0,
"aquaticSpeciesCached": 0.0
The whole file is 6782 lines long. I think that's too big to find an error in by hand, isn't it?
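For what it's worth, that error means the parser reached the end of the file while at least one object or array was still open; in the excerpt above, the facilities array and the objects inside it are never closed. A rough Python sketch can report which closing brackets are missing (it only tracks brackets outside of strings, which is enough for otherwise well-formed content):

```python
import json

def missing_closers(text):
    """Return the closing brackets needed to finish a truncated JSON text."""
    stack = []
    in_string = False
    escaped = False
    for ch in text:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            # Remember which closer this opener will eventually need.
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]" and stack and stack[-1] == ch:
            stack.pop()
    return "".join(reversed(stack))

# A tiny stand-in for the truncated save file above.
snippet = '{"facilities": [{"type": 33, "aquaticSpeciesCached": 0.0'
closers = missing_closers(snippet)
print(closers)                 # }]} : inner object, array, root object
json.loads(snippet + closers)  # now parses without error
```

Appending the reported closers to the end of the file should make it parse again, though the game may still expect fields that were lost when the file was cut off.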

Related

XTB API JSON cannot modify existing trade position

I am using the xAPI Playground for testing, link here:
https://playground.xapi.pro/
I want to edit/modify existing position with command: tradeTransaction
The documentation says that to modify an existing position I should use "type" 3, and "cmd" should match the existing position (0 for BUY and 1 for SELL):
{
"command": "tradeTransaction",
"arguments": {
"tradeTransInfo": {
"cmd": 1,
"customComment": "Some text",
"expiration": 0,
"order": order_number_as_int,
"price": open_price_as_double,
"sl": my_double_value,
"tp": my_another_double_value,
"symbol": "f.e. OIL.WTI",
"type": 3,
"volume": 0.01
}
}
}
Error code
{
"status": false,
"errorCode": "SE199",
"errorDescr": "Internal error"
}
All possible data collected from API about existing position:
{'cmd': 1, 'order': 474325736, 'digits': 2, 'offset': 0, 'order2': 474325838, 'position': 474325736, 'symbol': 'OIL.WTI', 'comment': '', 'customComment': '', 'commission': 0.0, 'storage': 0.0, 'margin_rate': 0.0, 'close_price': 76.65, 'open_price': 76.57, 'nominalValue': 0.0, 'profit': -3.56, 'volume': 0.01, 'sl': 80.0, 'tp': 70.0, 'closed': False, 'timestamp': 1676665564666, 'spread': 0, 'taxes': 0.0, 'open_time': 1676663063081, 'open_timeString': 'Fri Feb 17 20:44:23 CET 2023', 'close_time': None, 'close_timeString': None, 'expiration': None, 'expirationString': None},
API documentation is here:
http://developers.xstore.pro/documentation/#tradeTransaction
Of course I tried every possible value in "cmd" and "type", but it does not help.
The error codes are sometimes different, e.g.:
{
"command": "tradeTransaction",
"arguments": {
"tradeTransInfo": {
"cmd": 3,
"customComment": "Some text",
"expiration": 0,
"order": 474325838,
"price": 0,
"sl": 0,
"tp": 0,
"symbol": "OIL.WTI",
"type": 3,
"volume": 0.01
}
}
}
Error code:
{
"status": false,
"errorCode": "BE4",
"errorDescr": "remaining nominal must be greater than zero"
}
Any ideas what I am doing wrong?
I am in touch with XTB support and still waiting for a response.
Thanks in advance for any help!

Error loading files from Multiatlas in Phaser3

Trying to use the multiatlas feature in Phaser and TexturePacker.
Getting this error:
VM32201:1 GET http://localhost:8080/bg-sd.json 404 (Not Found)
Texture.js:250 Texture.frame missing: 1/1.png
The JSON file actually resides at http://localhost:8080/dist/img/bg-sd.json and I can browse to it. I can also browse to http://localhost:8080/dist/img/bg-1-sd.png.
I'm loading the atlas like:
scene.load.multiatlas({
key: 'bg-sd',
atlasURL: 'dist/img/bg-sd.json',
baseURL: 'dist/img'
});
The 1/1.png frame is also in the file:
{
"textures": [
{
"image": "bg-1-sd.png",
"format": "RGBA8888",
"size": {
"w": 1924,
"h": 2039
},
"scale": 0.5,
"frames": [
{
"filename": "1/1.png",
"rotated": false,
"trimmed": false,
"sourceSize": {
"w": 960,
"h": 540
},
"spriteSourceSize": {
"x": 0,
"y": 0,
"w": 960,
"h": 540
},
"frame": {
"x": 1,
"y": 1,
"w": 960,
"h": 540
}
},
I've tried various combinations of the path and baseURL settings but it will not load the file from dist/img.
I think providing both a baseURL and an atlasURL might be conflicting. The baseURL is prepended to the atlasURL value, so you're probably requesting something like dist/img/dist/img/bg-sd.json.
Have you tried without the configuration object, like:
this.load.multiatlas('bg-sd', './dist/img/bg-sd.json');

AWS DMS CDC failing to replicate text column from MySQL

We are using AWS DMS to replicate data from MySQL to Aurora MySQL with ongoing replication. We set up task validation to ensure source-to-target replication was successful. One of the tables has a text column which fails validation for ongoing replication. The task replicates the column properly during full load, but during ongoing replication the target column is always an empty string.
The schemas for the source and target are identical.
Below are the task settings for the replication task.
{
"TargetMetadata": {
"TargetSchema": "mydb",
"SupportLobs": true,
"FullLobMode": true,
"LobChunkSize": 64,
"LimitedSizeLobMode": false,
"LobMaxSize": 0,
"InlineLobMaxSize": 0,
"LoadMaxFileSize": 0,
"ParallelLoadThreads": 0,
"ParallelLoadBufferSize": 0,
"BatchApplyEnabled": false,
"TaskRecoveryTableEnabled": false
},
"FullLoadSettings": {
"TargetTablePrepMode": "DO_NOTHING",
"CreatePkAfterFullLoad": false,
"StopTaskCachedChangesApplied": false,
"StopTaskCachedChangesNotApplied": false,
"MaxFullLoadSubTasks": 8,
"TransactionConsistencyTimeout": 600,
"CommitRate": 10000
},
"Logging": {
"EnableLogging": true,
"LogComponents": [
{
"Id": "SOURCE_UNLOAD",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "SOURCE_CAPTURE",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "TARGET_LOAD",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "TARGET_APPLY",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "TASK_MANAGER",
"Severity": "LOGGER_SEVERITY_DEFAULT"
}
],
"CloudWatchLogGroup": "dms-tasks-test-dms-replication-instance",
"CloudWatchLogStream": "dms-task"
},
"ControlTablesSettings": {
"historyTimeslotInMinutes": 5,
"ControlSchema": "",
"HistoryTimeslotInMinutes": 5,
"HistoryTableEnabled": false,
"SuspendedTablesTableEnabled": false,
"StatusTableEnabled": false
},
"StreamBufferSettings": {
"StreamBufferCount": 3,
"StreamBufferSizeInMB": 8,
"CtrlStreamBufferSizeInMB": 5
},
"ChangeProcessingDdlHandlingPolicy": {
"HandleSourceTableDropped": true,
"HandleSourceTableTruncated": true,
"HandleSourceTableAltered": true
},
"ErrorBehavior": {
"DataErrorPolicy": "LOG_ERROR",
"DataTruncationErrorPolicy": "LOG_ERROR",
"DataErrorEscalationPolicy": "SUSPEND_TABLE",
"DataErrorEscalationCount": 50,
"TableErrorPolicy": "SUSPEND_TABLE",
"TableErrorEscalationPolicy": "STOP_TASK",
"TableErrorEscalationCount": 50,
"RecoverableErrorCount": 0,
"RecoverableErrorInterval": 5,
"RecoverableErrorThrottling": true,
"RecoverableErrorThrottlingMax": 1800,
"ApplyErrorDeletePolicy": "IGNORE_RECORD",
"ApplyErrorInsertPolicy": "LOG_ERROR",
"ApplyErrorUpdatePolicy": "LOG_ERROR",
"ApplyErrorEscalationPolicy": "LOG_ERROR",
"ApplyErrorEscalationCount": 0,
"ApplyErrorFailOnTruncationDdl": false,
"FullLoadIgnoreConflicts": true,
"FailOnTransactionConsistencyBreached": false,
"FailOnNoTablesCaptured": false
},
"ChangeProcessingTuning": {
"BatchApplyPreserveTransaction": true,
"BatchApplyTimeoutMin": 1,
"BatchApplyTimeoutMax": 30,
"BatchApplyMemoryLimit": 500,
"BatchSplitSize": 0,
"MinTransactionSize": 1000,
"CommitTimeout": 1,
"MemoryLimitTotal": 1024,
"MemoryKeepTime": 60,
"StatementCacheSize": 50
},
"PostProcessingRules": null,
"CharacterSetSettings": null,
"LoopbackPreventionSettings": null
}
I have had trouble finding anything on this. Does anyone have suggestions on how to troubleshoot or fix it?

Select multiple fields using jq

I am trying to filter docker-machine's output using the following jq filter.
docker-machine inspect default | jq '{ConfigVersion, .Driver.{MachineName, CPU, Memory}, DriverName}'
The original JSON from the inspect command is here:
{
"ConfigVersion": 3,
"Driver": {
"IPAddress": "192.168.99.100",
"MachineName": "default",
"SSHUser": "docker",
"SSHPort": 52314,
"SSHKeyPath": "/Users/apatil/.docker/machine/machines/default/id_rsa",
"StorePath": "/Users/apatil/.docker/machine",
"SwarmMaster": false,
"SwarmHost": "tcp://0.0.0.0:3376",
"SwarmDiscovery": "",
"VBoxManager": {},
"HostInterfaces": {},
"CPU": 2,
"Memory": 5120,
"DiskSize": 20000,
"NatNicType": "82540EM",
"Boot2DockerURL": "",
"Boot2DockerImportVM": "",
"HostDNSResolver": false,
"HostOnlyCIDR": "192.168.99.1/24",
"HostOnlyNicType": "82540EM",
"HostOnlyPromiscMode": "deny",
"UIType": "headless",
"HostOnlyNoDHCP": false,
"NoShare": false,
"DNSProxy": true,
"NoVTXCheck": false,
"ShareFolder": ""
},
"DriverName": "virtualbox",
"HostOptions": {
"Driver": "",
"Memory": 0,
"Disk": 0,
"EngineOptions": {
"ArbitraryFlags": [],
"Dns": null,
"GraphDir": "",
"Env": [],
"Ipv6": false,
"InsecureRegistry": [],
"Labels": [],
"LogLevel": "",
"StorageDriver": "",
"SelinuxEnabled": false,
"TlsVerify": true,
"RegistryMirror": [],
"InstallURL": "https://get.docker.com"
},
"SwarmOptions": {
"IsSwarm": false,
"Address": "",
"Discovery": "",
"Agent": false,
"Master": false,
"Host": "tcp://0.0.0.0:3376",
"Image": "swarm:latest",
"Strategy": "spread",
"Heartbeat": 0,
"Overcommit": 0,
"ArbitraryFlags": [],
"ArbitraryJoinFlags": [],
"Env": null,
"IsExperimental": false
},
"AuthOptions": {
"CertDir": "/Users/apatil/.docker/machine/certs",
"CaCertPath": "/Users/apatil/.docker/machine/certs/ca.pem",
"CaPrivateKeyPath": "/Users/apatil/.docker/machine/certs/ca-key.pem",
"CaCertRemotePath": "",
"ServerCertPath": "/Users/apatil/.docker/machine/machines/default/server.pem",
"ServerKeyPath": "/Users/apatil/.docker/machine/machines/default/server-key.pem",
"ClientKeyPath": "/Users/apatil/.docker/machine/certs/key.pem",
"ServerCertRemotePath": "",
"ServerKeyRemotePath": "",
"ClientCertPath": "/Users/apatil/.docker/machine/certs/cert.pem",
"ServerCertSANs": [],
"StorePath": "/Users/apatil/.docker/machine/machines/default"
}
},
"Name": "default"
}
I am getting the following error from jq for the command above
$ docker-machine inspect default | jq '{ConfigVersion, .Driver.{MachineName, CPU, Memory}, DriverName}'
jq: error: syntax error, unexpected FIELD (Unix shell quoting issues?) at <top-level>, line 1:
{ConfigVersion, .Driver.{MachineName, CPU, Memory}, DriverName}
jq: error: syntax error, unexpected '}', expecting $end (Unix shell quoting issues?) at <top-level>, line 1:
{ConfigVersion, .Driver.{MachineName, CPU, Memory}, DriverName}
jq: 2 compile errors
The problem is that jq's object construction syntax does not support a nested projection like .Driver.{...}; the inner object has to be built explicitly by piping .Driver into its own construction. Fixed it using
$ docker-machine inspect default |
jq '{ConfigVersion,
Driver: (.Driver|{MachineName, CPU, Memory}),
DriverName}'
{
"ConfigVersion": 3,
"Driver": {
"MachineName": "default",
"CPU": 2,
"Memory": 5120
},
"DriverName": "virtualbox"
}
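For comparison, the same projection can be written in plain Python; a sketch over a trimmed-down copy of the inspect output shown above:

```python
import json

# A trimmed-down copy of the docker-machine inspect output above.
doc = json.loads('''{
    "ConfigVersion": 3,
    "Driver": {"MachineName": "default", "CPU": 2, "Memory": 5120, "DiskSize": 20000},
    "DriverName": "virtualbox"
}''')

# Mirror the jq filter: keep two top-level keys and project a subset of Driver.
result = {
    "ConfigVersion": doc["ConfigVersion"],
    "Driver": {k: doc["Driver"][k] for k in ("MachineName", "CPU", "Memory")},
    "DriverName": doc["DriverName"],
}
print(json.dumps(result, indent=2))
```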

Loading a json file using python resulting in an error

Good morning,
When I try to load a JSON file into MongoDB, I get the following error:
raise ValueError("No JSON object could be decoded")
In my opinion, the problem comes from my second field, but I do not know how to change "" into a proper name, or delete it before loading.
My json file is :
{
"_id" : "585a9ecec62747d1e19497a5",
"" : NumberInt(0),
"VendorID" : NumberInt(2),
"lpep_pickup_datetime" : "2015-11-01 00:57:34",
"Lpep_dropoff_datetime" : "2015-11-01 23:57:45",
"Store_and_fwd_flag" : "N",
"RateCodeID" : NumberInt(5),
"Pickup_longitude" : -73.9550857544,
"Pickup_latitude" : 40.6637229919,
"Dropoff_longitude" : -73.958984375,
"Dropoff_latitude" : 40.6634483337,
"Passenger_count" : NumberInt(1),
"Trip_distance" : 0.09,
"Fare_amount" : 15.0,
"Extra" : 0.0,
"MTA_tax" : 0.0,
"Tip_amount" : 0.0,
"Tolls_amount" : 0.0,
"Ehail_fee" : "",
"improvement_surcharge" : 0.0,
"Total_amount" : 15.0,
"Payment_type" : NumberInt(2),
"Trip_type" : NumberInt(2),
"x" : -8232642.48775,
"y" : 4962866.701,
"valid_longitude" : NumberInt(1),
"valid_latitude" : NumberInt(1),
"valid_coordinates" : NumberInt(2)
}
The problem in your JSON file is not the empty-string key (that is allowed by the JSON grammar), but the occurrences of NumberInt(...): that is MongoDB shell syntax, not valid JSON. You need to provide the bare number without wrapping it in a function call.
So this will be valid:
{
"_id": "585a9ecec62747d1e19497a5",
"": 0,
"VendorID": 2,
"lpep_pickup_datetime": "2015-11-01 00:57:34",
"Lpep_dropoff_datetime": "2015-11-01 23:57:45",
"Store_and_fwd_flag": "N",
"RateCodeID": 5,
"Pickup_longitude": -73.9550857544,
"Pickup_latitude": 40.6637229919,
"Dropoff_longitude": -73.958984375,
"Dropoff_latitude": 40.6634483337,
"Passenger_count": 1,
"Trip_distance": 0.09,
"Fare_amount": 15.0,
"Extra": 0.0,
"MTA_tax": 0.0,
"Tip_amount": 0.0,
"Tolls_amount": 0.0,
"Ehail_fee": "",
"improvement_surcharge": 0.0,
"Total_amount": 15.0,
"Payment_type": 2,
"Trip_type": 2,
"x": -8232642.48775,
"y": 4962866.701,
"valid_longitude": 1,
"valid_latitude": 1,
"valid_coordinates": 2
}
If you have no control over the non-JSON file, then after reading the file contents you can strip the NumberInt wrappers with a regular expression (in Python). Note that the variable should not be named json, or it will shadow the json module:
import re
text = re.sub(r"NumberInt\((-?\d+)\)", r"\1", text)
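Putting that together, a minimal sketch that unwraps the NumberInt(...) values and then parses the result. It assumes NumberInt is the only shell wrapper present; NumberLong and friends would need the same treatment:

```python
import json
import re

# A shortened stand-in for the file contents shown in the question.
raw = '''{
    "_id": "585a9ecec62747d1e19497a5",
    "": NumberInt(0),
    "VendorID": NumberInt(2),
    "Fare_amount": 15.0
}'''

# Unwrap NumberInt(...) to a bare integer; -? also accepts negative values.
cleaned = re.sub(r"NumberInt\((-?\d+)\)", r"\1", raw)

doc = json.loads(cleaned)
print(doc["VendorID"])  # 2
print(doc[""])          # 0 -- the empty-string key is perfectly legal JSON
```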