OneNote API - copyToNotebook hanging

The OneNote API recently started to hang on this call:
https://www.onenote.com/api/beta/me/notes/sections/{id}/copyToNotebook
Polling the result (as always) now returns the following:
{
  "@odata.context": "https://www.onenote.com/api/beta/$metadata#me/notes/operations/$entity",
  "id": "copy-645387ea-eb06-4a0d-bcde-09d276e4e3d6fe0e14f6-3e53-421e-aa6c-8adcc998a4dd",
  "status": "not started",
  "createdDateTime": "2017-10-04T16:57:45.9599909Z",
  "lastActionDateTime": "2017-10-04T16:57:45.9599909Z"
}
The lastActionDateTime never updates and the command doesn't complete despite returning the correct 202 code and subsequent 200 codes.
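For reference, the polling itself is roughly the following (a sketch only, in Node-style JavaScript; operationUrl is the .../me/notes/operations/{id} URL I am already polling, and getAccessToken() is a placeholder for however the bearer token is acquired):

// Sketch: poll the operation status URL until it leaves the in-progress states.
// Assumes a global fetch (Node 18+ or a browser) and a placeholder getAccessToken().
async function pollCopyOperation(operationUrl, intervalMs = 2000) {
  while (true) {
    const res = await fetch(operationUrl, {
      headers: { Authorization: `Bearer ${await getAccessToken()}` },
    });
    const body = await res.json(); // { id, status, createdDateTime, lastActionDateTime, ... }
    if (body.status !== 'not started' && body.status !== 'running') {
      return body; // any terminal status (e.g. completed/failed)
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}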
Any help would be appreciated (especially in a live working environment)!

You are calling the API correctly - we introduced a problem in our service a couple of days ago. This should be fixed now.
Thanks for reporting this. Feel free to report anything to us at: https://twitter.com/onenotedev?lang=en.

I performed the same operation and got back the same response. When checking my account to verify that the copy worked, I get back an error.
The JSON object returned is below. The copied file seems to be corrupted on OneNote's side.
{
  "Responses": [
    [
      6,
      {
        "AvailableFileAccess": 1,
        "CellId": "40c4a0be-3ff1-49c7-b169-ba9d74e0724c|1",
        "ContentBytes": 0,
        "ContextId": "null",
        "FileId": "WOPIsrc=https%3A%2F%2Fwopi%2Eonedrive%2Ecom%2Fwopi%2Ffiles%2F1438AEF7B187116%21122&access_token=4wfG%2D4rgaZ6xmefLfMpPpnlTOOxmBy0LNKwftRv4WQeE1YRkcD72ADoSfR%2DC2ZxSi11DuWIEGx0iSqZb8JP88aA36k2o8KKqF1hPyoFTblc3TyLt6k65eXe%2DL7QEcINnMhnvAA9aeuP5W2ttwIYE6dDJQVh9xkv5JcUndBG1d%5F3Ldp3%2DlcE3gNO7IEZtvzf7B0mUkKoerjtJr3OBKxsQzHx1PRCfh99BtCNPNvVVAq91thnpmeuVOATVGgWlMWcHTt29l9a8%2DrbHa3jknZWee6F6DBxU%5FzgW7YpWbZ4LtW1zOx33SQVm3XjRQ628TsgAV7%5Fy%2DJ4IvxCnGMVhpwvC%2DXLGnP35DAJW7LuetWKJ93B%5Fs&access_token_ttl=1510611198523",
        "OperationId": 1,
        "RawCellStorageErrorCode": "InternalError.4",
        "RevisionList": [],
        "RootCellId": "null",
        "ServerPageStatsTrace": "",
        "StatusCode": 126
      }
    ]
  ]
}

Related

MongoDB, convert element data type from numeric string to number for a large collection (3kk) [duplicate]

This question already has answers here:
MongoDB CursorNotFound Error on collection.find() for a few hundred small records
(2 answers)
Closed 3 years ago.
I have a large MongoDB collection (3kk, i.e. about 3 million documents) in which I need to convert one field from a numeric string to a number.
I'm using a mongo-shell script which works for a small 100k-document collection; please see the script below:
db.SurName.find().forEach(function(tmp){
tmp.NUMBER = parseInt(tmp.NUMBER);
db.SurName.save(tmp);
})
But after a dozen or so minutes of work I got an error (the error occurs even if the collection is smaller, e.g. 1kk):
MongoDB Enterprise Test-shard-0:PRIMARY> db.SurName.find().forEach(function(tmp){
... tmp.NUMBER = parseInt(tmp.NUMBER);
... db.SurName.save(tmp);
... })
2020-01-18T16:59:21.173+0100 E QUERY [js] Error: command failed: {
"operationTime" : Timestamp(1579363161, 14),
"ok" : 0,
"errmsg" : "cursor id 4811116025485863761 not found",
"code" : 43,
"codeName" : "CursorNotFound",
"$clusterTime" : {
"clusterTime" : Timestamp(1579363161, 14),
"signature" : {
"hash" : BinData(0,"EemWWenbArSdh4dTFa0aNcfAPms="),
"keyId" : NumberLong("6748451824648323073")
}
}
} : getMore command failed: {
"operationTime" : Timestamp(1579363161, 14),
"ok" : 0,
"errmsg" : "cursor id 4811116025485863761 not found",
"code" : 43,
"codeName" : "CursorNotFound",
"$clusterTime" : {
"clusterTime" : Timestamp(1579363161, 14),
"signature" : {
"hash" : BinData(0,"EemWWenbArSdh4dTFa0aNcfAPms="),
"keyId" : NumberLong("6748451824648323073")
}
}
} :
_getErrorWithCode#src/mongo/shell/utils.js:25:13
doassert#src/mongo/shell/assert.js:18:14
_assertCommandWorked#src/mongo/shell/assert.js:583:17
assert.commandWorked#src/mongo/shell/assert.js:673:16
DBCommandCursor.prototype._runGetMoreCommand#src/mongo/shell/query.js:802:5
DBCommandCursor.prototype._hasNextUsingCommands#src/mongo/shell/query.js:832:9
DBCommandCursor.prototype.hasNext#src/mongo/shell/query.js:840:16
DBQuery.prototype.hasNext#src/mongo/shell/query.js:288:13
DBQuery.prototype.forEach#src/mongo/shell/query.js:493:12
#(shell):1:1
Is there a better/more robust way to do this?
EDIT:
The obj schema:
{"_id":{"$oid":"5e241b98c7cab1382c7c9d95"},
"SURNAME":"KOWALSKA",
"SEX":"KOBIETA",
"TERYT":"0201011",
"NUMBER":"51",
"COMMUNES":"BOLESŁAWIEC",
"COUNTIES":"BOLESŁAWIECKI",
"PROVINCES":"DOLNOŚLĄSKIE"
}
The best and fastest solution is to use a MongoDB aggregation with the $out operator.
Equivalent to:
insert into new_table
select * from old_table
We convert the NUMBER field with the $toInt operator (MongoDB version >= 4.0) and store the documents in the SurName2 collection. Once that has finished, we just drop the old collection and rename SurName2 to SurName.
db.SurName.aggregate([
{$addFields:{
NUMBER : {$toInt:"$NUMBER"}
}},
{$out: "SurName2"}
])
Once you have checked that everything is fine, execute these statements:
db.SurName.drop()
db.SurName2.renameCollection("SurName")
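If you are on MongoDB 4.2 or newer, an in-place alternative (a sketch using the same collection and field names as the question) is an update with an aggregation pipeline, which avoids the second collection and the client-side cursor entirely:

// Rewrite NUMBER in place; the pipeline form of update requires MongoDB 4.2+.
db.SurName.updateMany(
  {},
  [ { $set: { NUMBER: { $toInt: "$NUMBER" } } } ]
)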
EDIT: Googling "cursor id not found code 43" yielded this answer: https://stackoverflow.com/a/51602507/2279082
I don't have your data set, so I cannot test my answer very well. That being said, you can try to update just the specific field (see the docs for db.collection.update).
So your script will look like this:
db.SurName.find({}, {NUMBER: 1}).forEach(function(tmp){
db.SurName.update({_id: tmp._id}, {$set: {NUMBER: parseInt(tmp.NUMBER)}});
})
Let me know if it helps or if it needs an edit.
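If the plain forEach still loses its cursor on the full collection, a batched variant is another option. This is only a sketch (collection and field names taken from the question; the batch size is arbitrary) that walks the collection by _id, so no single cursor has to stay open for the whole run:

var batchSize = 10000;
var lastId = ObjectId("000000000000000000000000");
while (true) {
  // Fetch the next batch, projecting only the field we need.
  var docs = db.SurName.find({_id: {$gt: lastId}}, {NUMBER: 1})
                       .sort({_id: 1})
                       .limit(batchSize)
                       .toArray();
  if (docs.length === 0) break;
  // Convert each NUMBER and write the whole batch back in one bulk call.
  var ops = docs.map(function (d) {
    return {updateOne: {filter: {_id: d._id},
                        update: {$set: {NUMBER: parseInt(d.NUMBER, 10)}}}};
  });
  db.SurName.bulkWrite(ops, {ordered: false});
  lastId = docs[docs.length - 1]._id;
}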

Recommendation for storing and querying DataFactory run log?

I'd like to store and query the OUTPUT and ERROR data generated during a DataFactory run. The data is returned when calling Get-AzDataFactoryV2ActivityRun.
The intention is to use it to monitor possible pipeline execution errors, durations, etc. in an easy and fast way.
The data resembles JSON. It would be nice to visualize a summary of each execution through some HTML. Should I store this log in MongoDB?
Is there an easy and better way to centralize the log info from the multiple executions of different pipelines?
ResourceGroupName : Test
DataFactoryName : DFTest
ActivityRunId : 00000000-0000-0000-0000-000000000000
ActivityName : If Condition1
PipelineRunId : 00000000-0000-0000-0000-000000000000
PipelineName : Test
Input : {}
Output : {}
LinkedServiceName :
ActivityRunStart : 03/07/2019 11:27:21
ActivityRunEnd : 03/07/2019 11:27:21
DurationInMs : 000
Status : Succeeded
Error : {errorCode, message, failureType, target}
Activity 'Output' section:
"firstRow": {
"col1": 1
}
"effectiveIntegrationRuntime": "DefaultIntegrationRuntime (West Europe)"
This is probably not the best way to monitor your ADF pipelines.
Have you considered using Azure Monitor?
Find out more:
- https://learn.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor
- https://learn.microsoft.com/en-us/azure/azure-monitor/visualizations

JSON error and Run script error

Apologies for any incorrect formatting; I am new to these message boards.
I am using HTML to customise a text box in Spotfire, and although I get no errors in the Spotfire client, I get the following error when I open the Spotfire analysis in Chrome:
{ "titleFontColor":"#999999", "value":<SpotfireControl id="98bb1934a3c14ca2ab598deb672b8e44" />, "valueFontColor":"#010101", "symbol":"%", "min":<SpotfireControl id="0a05e2f1c4094273870d97a23b69efe2" />, "max":<SpotfireControl id="033f79eeccdb46ef982262c3c8a4ed0e" />, "humanFriendly":false, "humanFriendlyDecimal":2, "gaugeWidthScale":2.5, "gaugeColor":"#ebebeb", "label":"", "labelFontColor":"#b3b3b3", "shadowOpacity":0.2, "shadowSize":5, "shadowVerticalOffset":3, "levelColors":["#a9d70b","#f9c802","#ff0000"], "startAnimationTime":100, "startAnimationType":">", "donutStartAngle":90, "hideValue":true, "hideMinMax":true, "hideInnerShadow":false, "noGradient":false, "donut":true, "counter":false, "decimals":0, "formatNumber":false,
"customSectors": [{
"color" : "#D8181C",
"lo" : 0,
"hi" : 1
},{
"color" : "#F5CC0A",
"lo" : 1,
"hi" : 2
},{
"color" : "#50AF28",
"lo" : 2,
"hi" : 3
}] }
I get the following error:
Unexpected token , in JSON at position 39
I am very new to HTML and was wondering how I can locate position 39 in order to try to find the error?
The second issue I am running into is that the HTML displays the donuts as I would expect in my Spotfire client, but when I open the web player version the icons are not visible. I have to change tabs in the analysis and then return to the first page for the icons to appear. Is this likely to be an error in my HTML or an error elsewhere?
Thanks
In fact you have several problems:
- First, you have to put all of this code on one line; otherwise the "]" or "}" will cause an error.
- Second, you have unexpected characters ("<" and "/>") in your JSON; you have to clean your JSON.
Then, in JavaScript, you can do
console.log(your_var)
or
console.log(JSON.parse(your_var))
which will turn your JSON (which is currently a string) into an object.
We would need the sources to check what else could be the problem.
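For the "position 39" part: the position in that error is a character index (counting from 0) into the string passed to JSON.parse, so you can simply print the characters around it. A sketch (your_var stands for whatever string you are parsing, as above):

var pos = 39;
// Print a window of characters around the reported offset to see what breaks the parse.
console.log(your_var.slice(Math.max(0, pos - 10), pos + 10));
// Once the <SpotfireControl ... /> placeholders have been replaced by real values,
// JSON.parse should succeed:
var config = JSON.parse(your_var);
console.log(config.customSectors);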

REST API status as integer or as string?

My colleague and I are working on a REST API. We've been arguing quite a lot about whether the status of a resource/item should be a string or an integer; we both need to read, understand and modify this resource (using separate applications). As this is a very general subject, Google did not help settle the argument. I wonder what your experience is and which way is better.
For example, let's say we have a Job resource, which is accessible through the URI http://example.com/api/jobs/someid and has the following JSON representation, which is stored in a NoSQL DB:
JOB A:
{
"id": "someid",
"name": "somename",
"status": "finished" // or "created", "failed", "compile_error"
}
So my question is - maybe it should be more like the following?
JOB B:
{
"id": "someid",
"name": "somename",
"status": 0 // or 1, 2, 3, ...
}
In both cases each of us would have to create a map that we use to make sense of the status in our application logic. But I myself am leaning towards the first one, as it is far more readable... You can also easily mix up '0' (a string) and 0 (a number).
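For instance, in JavaScript the loose comparison hides that mix-up while the strict one does not:

console.log('0' == 0);   // true  - loose equality coerces the string
console.log('0' === 0);  // false - the types differ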
However, as the API is consumed by machines, readability is not that important. Using numbers also has some other advantages: they are widely accepted when working with applications in a console, and they can be convenient when you want to introduce arbitrary new failure statuses, say:
status == 50 - means you have problem with network component X,
status > 100 - means some multiple special cases.
When you have numbers, you don't need to make up all those string names for them. So which way is best in your opinion? Maybe we need multiple fields (though this could make matters a bit confusing):
JOB C:
{
"id": "someid",
"name": "somename",
"status": 0, // or 1, 2, 3...
"error_type": "compile_error",
"error_message": "You coding skill has failed. Please go away"
}
Personally, I would handle this situation with a combination of the two approaches you have mentioned. I would store the statuses as integers in the database, but would create an enumeration or a class of constants to map status names to numeric status values.
For example (in C#):
public enum StatusType
{
Created = 0,
Failed = 1,
Compile_Error = 2,
// Add any further statuses here.
}
You could then convert the numeric status stored in the database to an instance of this enumeration, and use this for decision making throughout your code.
For example (in C#):
StatusType status = (StatusType) storedStatus;
if(status == StatusType.Created)
{
// Status is created.
}
else
{
// Handle any other statuses here.
}
If you're being pedantic, you could also store these mappings in your DB.
For access via an API, you could go either way depending on your requirements. You could even return a result with both the status number and status text:
var yourObject = new
{
status_code = 1,
status = "Failed"
};
You could also create an API endpoint to retrieve the status name from a given code. However, returning both the status code and the name in the same response would be best from a performance standpoint.

MongoDB: how to select an empty-key subdocument?

Ahoy! I'm having a very funny issue with MongoDB and, possibly more generally, with JSON. Basically, I accidentally created some MongoDB documents whose subdocuments contain an empty key, e.g. (I stripped the ObjectIDs to make the code look nicer):
{
"_id" : ObjectId("..."),
"stats" :
{
"violations" : 0,
"cost" : 170,
},
"parameters" :
{
"" : "../instances/comp/comp20.ectt",
"repetition" : 29,
"time" : 600000
},
"batch" : ObjectId("..."),
"system" : "Linux 3.5.0-27-generic",
"host" : "host3",
"date_started" : ISODate("2013-05-14T16:46:46.788Z"),
"date_stopped" : ISODate("2013-05-14T16:56:48.483Z"),
"copy" : false
}
Of course the problem is this line:
"" : "../instances/comp/comp20.ectt"
since I cannot get back the value of the field. If I query using:
db.experiments.find({"batch": ObjectId("...")}, { "parameters.": 1 })
what I get is the full content of the parameters subdocument. My guess is that . is probably ignored if followed by an empty selector. From the JSON specification (15.12.*) it looks like empty keys are allowed. Do you have any ideas about how to solve that?
Is that a known behavior? Is there a use for that?
Update: I tried to $rename the field, but that won't work, for the same reason: keys that end with . are not allowed.
Update: I filed an issue on the MongoDB issue tracker.
Thanks,
Tommaso
I have this same problem. You can select your sub-documents with something like this:
db.foo.find({"parameters.":{$exists:true}})
The dot at the end of "parameters" tells Mongo to look for an empty key in that sub-document. This works for me with Mongo 2.4.x.
Empty keys are not well supported by Mongo; I don't think they are officially supported, but you can insert data with them. So you shouldn't be using them, and you should find the place in your system where these keys are inserted and eliminate it.
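In the meantime, to read the value and get rid of the empty keys, something like this might work. It is only a sketch: "instance" is just a name I made up for the field, and the whole parameters subdocument is rewritten so that no path ending in "." is ever needed:

db.experiments.find({"parameters.": {$exists: true}}).forEach(function (doc) {
  // The empty key is perfectly reachable from JavaScript:
  var value = doc.parameters[""];
  // Move it to a named key, drop the empty one, then replace the whole subdocument.
  doc.parameters.instance = value;
  delete doc.parameters[""];
  db.experiments.update({_id: doc._id}, {$set: {parameters: doc.parameters}});
});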
I just checked the code, and this does not currently seem possible, for the reasons you mention. Since it is allowed to create documents with zero-length field names, I would consider this a bug. You can report it here: https://jira.mongodb.org
By the way, ironically, you can query on it:
> db.c.save({a:{"":1}})
> db.c.save({a:{"":2}})
> db.c.find({"a.":1})
{ "_id" : ObjectId("519349da6bd8a34a4985520a"), "a" : { "" : 1 } }