Posting Documents to OneNote via new REST API - windows-store-apps

For some reason, any document I upload to OneNote via the new REST API is corrupt when viewed from OneNote. Everything else is fine, but the file (for example a Word document) isn't clickable and, if you try to open it, it shows as corrupt.
This is similar to what can happen when there is a problem with the byte array or how it's held in memory, but that doesn't seem to be the case here. I use essentially the same process to upload the file bytes to SharePoint, OneDrive, etc. It's only with OneNote that the file ends up corrupt.
Here is a simplified version of the C#:
HttpRequestMessage createMessage = null;
HttpResponseMessage response = null;

using (var streamContent = new ByteArrayContent(fileBytes))
{
    // The binary part: the Word document bytes
    streamContent.Headers.ContentType = new MediaTypeHeaderValue("application/vnd.openxmlformats-officedocument.wordprocessingml.document");
    streamContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("form-data");
    streamContent.Headers.ContentDisposition.Name = fileName;

    createMessage = new HttpRequestMessage(HttpMethod.Post, authorizationUrl)
    {
        Content = new MultipartFormDataContent
        {
            // The "Presentation" part: the page HTML that references the attachment
            {
                new StringContent(simpleHtml, System.Text.Encoding.UTF8, "text/html"),
                "Presentation"
            },
            // The attachment part built above
            { streamContent }
        }
    };

    response = await client.SendAsync(createMessage);
    var stream = await response.Content.ReadAsStreamAsync();
    successful = response.IsSuccessStatusCode;
}
Does anyone have any thoughts, or working code, for uploading an actual binary document via the OneNote API from a Windows Store app?

The WinStore code sample contains a working example (method: CreatePageWithAttachedFile) of how to upload an attachment.
The slight differences I can think of between the above code snippet and the code sample are that the sample uploads a PDF file (instead of a Word document) and uses StreamContent (while the above snippet uses ByteArrayContent).
I downloaded the code sample and locally modified it to use a document file and ByteArrayContent, and I was able to upload the attachment and view it successfully. I used the following to get a byte array from the given stream:
using (BinaryReader br = new BinaryReader(stream))
{
    byte[] b = br.ReadBytes(Convert.ToInt32(stream.Length));
}
The rest of the code looks pretty similar to the above snippet and overall worked successfully for me.
Here are a few more things to consider while troubleshooting the issue:
Verify the attachment file itself isn't corrupt in the first place (e.g. can it be opened without the OneNote API being in the mix?).
Verify the API returned a 201 (Created) HTTP status code and that the resulting page contains the attachment icon and allows downloading/viewing the attached file; a quick status-code check is sketched below.
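A minimal way to check that second point, reusing the response variable from the question's snippet (just a sketch, not part of the code sample):
// Assumes `response` is the HttpResponseMessage returned by client.SendAsync above
if (response.StatusCode == System.Net.HttpStatusCode.Created) // 201
{
    // The body is JSON describing the new page, including links for viewing it
    var json = await response.Content.ReadAsStringAsync();
    System.Diagnostics.Debug.WriteLine(json);
}
else
{
    System.Diagnostics.Debug.WriteLine("Create page failed: " + (int)response.StatusCode);
}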

So, the issue was (strangely) the addition of the content type in the object tag sent over in the HTML content, which isn't shown above. The documentation refers to adding a type=[mime type] attribute to the object tag, but since the WinStore example didn't do this (it only adds the mime type to the MediaTypeHeaderValue), I removed it and it worked perfectly.
Just changing it to this worked:
<object data-attachment=\"" + fileName + "\" data=\"name:" + attachmentPartName + "\" />
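For context, that object tag sits inside the HTML posted as the "Presentation" part. A rough sketch of what the simpleHtml could look like (only the object tag comes from the actual fix; the surrounding markup is illustrative):
// Illustrative only: fileName and attachmentPartName are the same variables used above
string simpleHtml =
    "<html>" +
    "<head><title>Page with attachment</title></head>" +
    "<body>" +
    "<object data-attachment=\"" + fileName + "\" data=\"name:" + attachmentPartName + "\" />" +
    "</body>" +
    "</html>";
The part name referenced by data="name:..." has to match the name given to the binary part of the multipart request.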
Thanks for pointing me in the right direction with the sample code!

Related

Autodesk Forge - downloaded item has a different name

I'm using the https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey/objects/:objectName endpoint to download an item (a Revit model) from BIM 360, using this documentation. The file gets downloaded fine and the contents are correct; however, after downloading, the file name is the GUID of the file (4aac519c-ab91-42a5-85c5-f023c82d4736.rvt), not the 'displayName' of the file (my file.rvt). I'm getting the file name like so:
var headervalue = resp.Headers.FirstOrDefault(x => x.Name == "Content-Disposition")?.Value;
string contentDispositionString = Convert.ToString(headervalue);
ContentDisposition contentDisposition = new ContentDisposition(contentDispositionString);
fileName = contentDisposition.FileName;
I've used the same method on another project and it's working fine. The content and the file name of the file both are correct. However somehow the endpoint is behaving differently on this project.
Any pointers on what could be the issue here?
I'm not sure if this is mentioned somewhere in the documentation but I don't think you should rely on the Content-Disposition of the response headers for this. If you want to get a filename for whichever object you're downloading, you should always get it from the actual item record (obtained in the 3rd step of the tutorial you linked to).
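As a rough illustration of that approach, something like the following could read the display name from the item record before downloading (a sketch only: the endpoint and JSON shape should be checked against the Data Management API docs, and projectId/itemId are placeholders for the BIM 360 project and item):
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

// Sketch: get the file name from the item record instead of the Content-Disposition header
async Task<string> GetDisplayNameAsync(string accessToken, string projectId, string itemId)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        var url = $"https://developer.api.autodesk.com/data/v1/projects/{projectId}/items/{itemId}";
        var json = await client.GetStringAsync(url);

        // The item record is JSON:API shaped; displayName sits under data.attributes
        var item = JObject.Parse(json);
        return (string)item["data"]["attributes"]["displayName"];
    }
}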

Snapchat download all memories at once

Over the years on Snapchat I have saved lots of photos that I would like to retrieve now. The problem is they do not make it easy to export, but luckily you can go online and request all of your data (that's great).
I can see a download link for each of my photos, and using the local HTML file, if I click download it starts downloading.
Here's the tricky part: I have around 15,000 downloads to do, and manually clicking each individual one will take ages. I've tried extracting all of the links behind the download buttons, which gives me lots of URLs (great), but if you paste a URL into the browser you get "Error: HTTP method GET is not supported by this URL".
I've tried a multitude of different Chrome extensions and none of them capture the actual download, just the HTML shown on the left-hand side.
The download button is a clickable link (an a href) that just starts the download in the tab.
I'm trying to figure out what the best way of bulk downloading each of these individual files is.
So, I just looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, but you can also just use their method.
Open your console and use downloadMemories(<url>)
Or if you don't have the urls you can retrieve them yourself:
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
eval(links[0].href);
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader
Using the .json file you can download them one by one with python:
import requests

# POST the download link from the JSON first; the response body is the URL of the actual file
req = requests.post(url, allow_redirects=True)
file_url = req.text
file = requests.get(file_url)
Then get the correct extension and the date:
# `date` and `type` come from the "Date" and "Media Type" fields of each entry in the JSON
day = date.split(" ")[0]
time = date.split(" ")[1].replace(':', '-')
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
f.write(file.content)
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation, just place the memories_history.json file in the same directory and run it. It skips the files that have already been downloaded.
Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial guiding you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response will be a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
"Date": "2022-01-26 12:00:00 UTC",
"Media Type": "Image",
"Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// import required libraries
const fetch = require('node-fetch'); // Needed for making fetch requests
const fs = require('fs'); // Needed for writing to filesystem

// Wrapped in an async IIFE so that `await` can be used
(async () => {
  const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));

  // POST to the memory's Download Link; the response body is the URL of the actual file
  const response = await fetch(memory['Download Link'], { method: 'POST' });
  const url = await response.text(); // returns URL to file

  // We can now use the `url` to download the file.
  const download = await fetch(url, { method: 'GET' });
  const fileName = 'memory.jpg'; // file name we want this saved as
  const fileData = download.body; // contents of the file (a readable stream)

  // Write the contents of the file to this computer using Node's file system
  const fileStream = fs.createWriteStream(fileName);
  fileData.pipe(fileStream);
  fileStream.on('finish', () => {
    console.log('memory successfully downloaded as memory.jpg');
  });
})();

Edit on Google Docs without converting

I'm integrating my system with Google Drive. Everything is working so far except for one thing: I cannot edit the uploaded Word documents without converting them to Google Docs first.
I've read here it's possible using a Chrome plugin:
https://support.google.com/docs/answer/6055139?hl=en
But that's not my goal. I'm storing the file's information in my database and then I just request the proper URL for editing and previewing. Previewing is working fine, but when I try the edit URL it says the file does not exist. If I convert the file (using Google Drive's interface) and pass the new ID, it works. I don't want to convert the users' documents to Google Docs because they still use Word as their main editing software.
Is there a way to accomplish this?
This is how I'm doing it right now:
public static File UploadFile(FileInfo fileInfo, Stream stream, string googleAccount)
{
    var mimetype = GetValidMimetype(fileInfo.MimeType);
    var parentFolder = GetParentFolder(fileInfo);

    var file = new File { Title = fileInfo.Title, MimeType = mimetype, Parents = parentFolder };

    var uploadRequest = _service.Files.Insert(file, stream, mimetype);
    uploadRequest.Upload();
    file = uploadRequest.ResponseBody;

    ShareFileWith(file.Id, googleAccount);

    return file;
}
This is the URL for editing (where {0} is the file ID):
https://docs.google.com/document/d/{0}/edit?usp=drivesdk
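In code, that URL is built from the stored ID along these lines (a trivial sketch; file is the File returned by UploadFile above):
// Sketch: building the edit URL from the Drive file ID stored in the database
string editUrl = string.Format("https://docs.google.com/document/d/{0}/edit?usp=drivesdk", file.Id);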
I know that in order to convert the file I just need to:
uploadRequest.Convert = true;
But again, that's not what I want. Is it possible?
Thanks!
EDIT
Just an update: Convert = true should have worked, but it doesn't. I've raised an issue for that here: https://github.com/google/google-api-dotnet-client/issues/712
Bottom line: it only works if I open the file in Google Docs and then use its ID...

How to set description in the BoxFileUpload request?

I am using the Box Windows V2 SDK to upload files to my Box account using the following code:
BoxFileRequest request = new BoxFileRequest()
{
    Parent = new BoxRequestEntity() { Id = "0" },
    Name = attachment.Name,
    Description = "This is failing to be sent..."
};
var uploadedFile = client.FilesManager.UploadAsync(request, new MemoryStream(attachment.FileContent)).Result;
Uploading the file works great. However, I cannot get the description field sent to the Box server. Is it possible to upload a file with a description, or do I have to call FilesManager.UpdateInformationAsync after the file has been uploaded? It would be nice if this were an option so I could reduce the number of API calls.
The description must be set in a separate API request after uploading the file.
We have heard that reusing some of the request objects may cause confusion about what can be done with each request. We are evaluating whether or not this should be changed.
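To illustrate the two-step approach with the Box Windows V2 SDK, a sketch along these lines should work (it reuses client and attachment from the question; exact property and method names may vary between SDK versions):
// Step 1: upload the file (the upload endpoint ignores any description)
BoxFileRequest uploadRequest = new BoxFileRequest()
{
    Parent = new BoxRequestEntity() { Id = "0" },
    Name = attachment.Name
};
BoxFile uploadedFile = client.FilesManager
    .UploadAsync(uploadRequest, new MemoryStream(attachment.FileContent)).Result;

// Step 2: set the description with a separate update request
BoxFileRequest updateRequest = new BoxFileRequest()
{
    Id = uploadedFile.Id,
    Description = "Set after the upload completes."
};
BoxFile updatedFile = client.FilesManager.UpdateInformationAsync(updateRequest).Result;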

Box Server file upload error while using a QT application coded using the new OAuth2 API

I have been working on a Box App using the API v2 for the past few days and have successfully authenticated using OAuth2.
My app retrieves the access token successfully and I'm also able to access my Box account using the access token; however, an upload of a file fails with a response of 299.
The HTML response I see from Box after posting an upload request has the following message:
"Sorry, we can't access that page. Your Box account may be temporarily unavailable. We're working on resolving the issue and should be back up soon."
I take it all 2xx errors mean that the request has been accepted but the Box server cannot handle it.
Given below is a snippet of my code used to post the file.
Any tips on what could be wrong are appreciated.
I am following instructions from
http://developers.box.com/get-started/#uploading-and-downloading
QUrl requrl = QUrl("https://www.box.com/api/2.0/files/content");
std::string token = m_acc_token;
QString hdrval = "Bearer "+QString(token.c_str());
QNetworkRequest qnr(requrl);
qnr.setRawHeader("Authorization",hdrval.toUtf8());
QString boundary;
boundary = "---------7d935033608e2";
QByteArray data;
data.append("file=#btest.txt");
data.append(boundary);
data.append("folder_id=0");
data.append(boundary);
qnr.setHeader(QNetworkRequest::ContentTypeHeader,"multipart/form-data; boundary=---------7d935033608e2");
qnr.setHeader(QNetworkRequest::ContentLengthHeader,data.size());
QNetworkReply* areply = NULL;
areply = m_networkManager->post(qnr,data);
You can implement it like this:
QHttpMultiPart *multiPart = new QHttpMultiPart(QHttpMultiPart::FormDataType);

// Form field carrying the destination folder id
QHttpPart headerPart;
headerPart.setHeader(QNetworkRequest::ContentDispositionHeader, QVariant("form-data; name=\"parent_id\""));
headerPart.setBody(QString(aParentFolderId).toLatin1());

// Form field carrying the file contents
QHttpPart textPartData;
textPartData.setHeader(QNetworkRequest::ContentDispositionHeader, QVariant("form-data; name=\"filename\"; filename=\"btest.txt\""));
textPartData.setBodyDevice(&File); // File is a QFile and must already be open
File.setParent(multiPart);         // keep the QFile alive as long as the multipart object

multiPart->append(headerPart);
multiPart->append(textPartData);

QNetworkRequest networkReq;
networkReq.setUrl(QUrl("https://upload.box.com/api/2.0/files/content"));
networkReq.setRawHeader("Authorization", "Bearer " + AccessToken.toLatin1());

QNetworkReply *networkReply = mNetworkAccessManager.post(networkReq, multiPart);
multiPart->setParent(networkReply); // delete the multipart with the reply
The curl call in the Box API documentation can't be translated directly to code as you have done. The file=#btest.txt argument on the command line tells curl to send the contents of the file btest.txt as the value of the parameter file; appending that literal string to the request body does not do the same thing.
Additionally, your multipart boundaries are malformed: each boundary line inside the body starts with -- followed by the declared boundary string and ends in \r\n, one must be present before the first part, and the final boundary has an extra trailing --. If you are interested in manually implementing multipart form data, I'd recommend reading RFC 1867 and RFC 2388.
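For reference (a sketch, not taken from the Box documentation), a well-formed body for the boundary declared in the question would look roughly like this, with every line terminated by \r\n; the parameter names are kept from the question's snippet and <contents of btest.txt> stands in for the raw file bytes:
-----------7d935033608e2
Content-Disposition: form-data; name="parent_id"

0
-----------7d935033608e2
Content-Disposition: form-data; name="filename"; filename="btest.txt"
Content-Type: text/plain

<contents of btest.txt>
-----------7d935033608e2--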
The Box API will return a 500 response if it is sent a malformed multipart POST body.
I'd recommend using QHttpMultiPart, which is part of the Qt framework, for multipart form uploads.