Download file from Google Drive to local folder from Google Apps Script - google-apps-script

I'm trying to download a specific *.csv file from my Google Drive to a local folder on my computer. I have tried the following with no luck:
ContentService.createTextOutput().downloadAsFile(fileName);
I don't get an error, and nothing appears to happen. Any ideas on what's wrong with my attempt?

ContentService is used to serve text content as a web application. The line of code you've shown does nothing on its own. Assuming it's the one-line body of a doGet() function that you've deployed as a web app, here's why you're seeing nothing:
ContentService - use the Content Service, and...
.createTextOutput() - create an empty text output object, then...
.downloadAsFile(fileName) - when a browser invokes our GET service, have it download the content (named fileName) rather than displaying it.
Since we have no content, there's nothing to download, so you see, well, nothing.
CSV Downloader Script
This script will get the text content of a csv file on your Google Drive, and serve it for downloading. Once you've saved a version of the script and published it as a web app, you can direct a browser to the published URL to start a download.
Depending on your browser settings, you may be able to select a specific local folder and/or change the file name. You have no control over that from the server side, where this script runs.
/**
 * This function serves content for a script deployed as a web app.
 * See https://developers.google.com/apps-script/execution_web_apps
 */
function doGet() {
  var fileName = "test.csv";
  return ContentService
    .createTextOutput()           // Create textOutput object
    .append(getCsvFile(fileName)) // Append the text from our csv file
    .downloadAsFile(fileName);    // Have browser download, rather than display
}
/**
 * Return the text contained in the given csv file.
 * (Uses DriveApp; the DocsList service in the original answer has since been deprecated.)
 */
function getCsvFile(fileName) {
  var files = DriveApp.getFilesByName(fileName);
  var csvFile = "No Content";
  if (files.hasNext()) {
    csvFile = files.next().getBlob().getDataAsString();
  }
  return csvFile;
}


Snapchat download all memories at once

Over the years on Snapchat I have saved lots of photos that I would now like to retrieve. The problem is they do not make it easy to export, but luckily, if you go online, you can request all your data (that's great).
I can see a download link for each of my photos, and using the local HTML file, if I click download it starts downloading.
Here's the tricky part: I have around 15,000 downloads to do, and manually clicking each one will take ages. I've tried extracting all of the links behind the download buttons, and this creates lots of URLs (great), but the problem is, if you paste a URL into the browser, you get "Error: HTTP method GET is not supported by this URL".
I've tried a multitude of Chrome extensions and none of them show the actual download, just the HTML on the left-hand side.
The download button is a clickable link (an <a> href) that just starts the download in the tab.
I'm trying to figure out the best way to bulk-download each of these individual files.
So, I just looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, but you can also just use their method.
Open your console and use downloadMemories(<url>)
Or if you don't have the urls you can retrieve them yourself:
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
eval(links[0].href);
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader
Using the .json file you can download them one by one with Python:
import requests

req = requests.post(url, allow_redirects=True)
response = req.text
file = requests.get(response)
Then get the correct extension and the date:
day = date.split(" ")[0]
time = date.split(" ")[1].replace(':', '-')
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
    f.write(file.content)
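The date-to-filename step above can be sketched as one small self-contained helper (shown here in JavaScript, since that's what most of this thread uses; the field format is taken from the fragments above, and the function name is hypothetical):

```javascript
// Hypothetical helper mirroring the steps above: build a target file name
// from a memory's "Date" and "Media Type" fields (format assumed from the export).
function memoryFileName(date, mediaType) {
  const parts = date.split(' ');            // e.g. "2022-01-26", "12:00:00", "UTC"
  const day = parts[0];
  const time = parts[1].replace(/:/g, '-'); // colons are not safe in file names
  const ext = mediaType === 'VIDEO' ? 'mp4' : 'jpg';
  return `memories/${day}_${time}.${ext}`;
}

console.log(memoryFileName('2022-01-26 12:00:00 UTC', 'VIDEO'));
// → memories/2022-01-26_12-00-00.mp4
```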
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation; just place the memories_history.json file in the same directory and run it. It skips files that have already been downloaded.
Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial guiding you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response body will be a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
  "Date": "2022-01-26 12:00:00 UTC",
  "Media Type": "Image",
  "Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// import required libraries
const fetch = require('node-fetch'); // needed for making fetch requests (v2; v3 is ESM-only)
const fs = require('fs'); // needed for writing to the filesystem

// top-level await isn't available in a CommonJS script, so wrap the logic in an async function
(async () => {
  const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));
  const response = await fetch(memory['Download Link'], { method: 'POST' });
  const url = await response.text(); // the response body is a URL to the file
  // We can now use the `url` to download the file.
  const download = await fetch(url, { method: 'GET' });
  const fileName = 'memory.jpg'; // file name we want this saved as
  const fileData = download.body; // contents of the file, as a readable stream
  // Write the contents of the file to disk using Node's file system
  const fileStream = fs.createWriteStream(fileName);
  fileData.pipe(fileStream);
  fileStream.on('finish', () => {
    console.log('memory successfully downloaded as memory.jpg');
  });
})();

How do I download information stored in my Chrome extension?

I am developing a Chrome extension where the workflow looks like:
user browses the internet and can save links. I have a strong preference to store all links locally instead of, say, having to talk to an external server.
user can then hit a button in the extension which generates and downloads a csv file of all the links saved so far
Two questions:
What is the appropriate way to store this data over multiple sessions?
What is the appropriate way of generating the file and prompting a download?
For 1, I plan on using chrome.storage.local.
For 2, it's unclear what the best way is. I'm considering writing the data to options.html or popup.html, then calling chrome.downloads to download that page, but it feels like a massive hack.
What is the correct way of doing 1 and 2?
Using chrome.storage.local is the right way here.
I am using this snippet right from the popup in order to save text/json/csv files:
/**
 * @param data {String} what to save
 * @param extension {String} file extension
 */
function saveFile(data, extension = 'json') {
  const fileName = `export-file.${extension}`;
  const textFileAsBlob = new Blob([data], {type: 'text/plain'});
  const downloadLink = document.createElement('a');
  downloadLink.download = fileName;
  downloadLink.href = window.URL.createObjectURL(textFileAsBlob);
  downloadLink.target = '_blank';
  downloadLink.click();
  return fileName;
}
It will save the file to disk, and it's not a hack.
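The snippet above takes an already-built string; turning the saved links into CSV text first can be sketched like this (a minimal helper; the field names url and title are my assumption about the stored data shape):

```javascript
// Hypothetical CSV builder for an array of saved links, e.g. read back
// from chrome.storage.local. Field names (url, title) are assumed.
function linksToCsv(links) {
  // quote each field and double any embedded quotes, per CSV convention
  const escape = (value) => `"${String(value).replace(/"/g, '""')}"`;
  const header = 'url,title';
  const rows = links.map((link) => `${escape(link.url)},${escape(link.title)}`);
  return [header].concat(rows).join('\n');
}

const csv = linksToCsv([{url: 'https://example.com', title: 'Example "site"'}]);
console.log(csv);
// url,title
// "https://example.com","Example ""site"""
```

The result could then be handed straight to saveFile(csv, 'csv').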
Update for #2
Another way is to pass a base64 data: URL to the downloads API:
chrome.downloads.download({url: 'data:text/plain;base64,SEVMTE8gV09STEQh', filename: 'test.txt'})
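Building such a data: URL from arbitrary text can be sketched as follows (a hypothetical helper; it uses Node's Buffer for the demo, while in an extension page btoa() plays the same role):

```javascript
// Hypothetical helper: wrap text in a base64 data: URL suitable for
// the chrome.downloads API. Buffer is Node-specific; in a browser or
// extension context, btoa(text) produces the same base64 string.
function textToDataUrl(text, mime = 'text/plain') {
  const base64 = Buffer.from(text, 'utf8').toString('base64');
  return `data:${mime};base64,${base64}`;
}

console.log(textToDataUrl('HELLO WORLD!'));
// → data:text/plain;base64,SEVMTE8gV09STEQh
```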

Sending a .zip from URL through Gmail

Here's my code:
function myFunction() {
  var url = "https://cdn-04.anonfile.com/t3ScWad7b9/fc36c282-1522257874/CRASH_FILES__2018.03.24.14.06.27_.zip";
  var blob = UrlFetchApp.fetch(url).getBlob();
  GmailApp.sendEmail("derekantrican@gmail.com", "", "", {attachments: [blob]});
}
As you can see, the function gets a file (a .zip) from the url and attaches it to an email that it then sends. The problem is that Google's servers are blocking the .zip:
"Learn more" leads here: https://support.google.com/mail/answer/6590?hl=en
This .zip (you can download it from the URL yourself) only contains two .log files and a .xml file - none of which are on the banned file types list at the link above.
I've also tried uploading to Google Drive first, then sending:
function myFunction() {
  var url = "https://cdn-04.anonfile.com/t3ScWad7b9/fc36c282-1522257874/CRASH_FILES__2018.03.24.14.06.27_.zip";
  var zipBlob = UrlFetchApp.fetch(url).getBlob();
  zipBlob.setContentType("application/zip");
  var file = DriveApp.createFile(zipBlob);
  GmailApp.sendEmail("derekantrican@gmail.com", "", "", {attachments: [file.getBlob()]});
}
Same result. Any other suggestions?
Have you actually checked the contents of the 'zip' file that gets saved to your Google Drive? The issue is probably due to you attaching an HTML page, not the zip file. The link you provided is for the landing page, not the download itself, so the content of the page is exactly what is being served back when you call UrlFetchApp.fetch().
What was saved to my Google Drive after sending a 'GET' request to your link was the HTML landing page itself.
The page requires a user to manually click on the button to initiate the download. There are no redirects, so you can't get the file by using this pattern:
UrlFetchApp.fetch(url, {followRedirects: true});
The question is, can you get the actual link to the file? Well, kind of. In Chrome, open your downloads page by pressing Ctrl + J (Windows) or CMD + Shift + J (MacOS). The URLs displayed next to file names are the actual direct links to the files. However, these are very short-lived and expire within seconds.
You can grab the link and quickly paste it inside your code to make sure it works
var blob = UrlFetchApp.fetch(url).getBlob();
Logger.log(blob.getContentType()); //logs 'application/zip'
Logger.log(blob.getName()); // logs CRASH_FILES__2018.03.24.14.06.27_.zip
DriveApp.createFile(blob);
Result: the file saves correctly, but note that it stops working after a few seconds, as a new unique link is generated by the server.

Drive API - file or folder access timeout

I have a simple script to get files from Drive and print them. When I run it, it runs for about 400-500 seconds and then I get a timeout error.
I have two Google accounts. The same script works in account-1 (testing account) but not in account-2 (main account).
Any help in isolating this issue would be appreciated.
I tried stepping through the debugger, and control comes back again and again to line 4 (file.hasNext()). How do I debug and make this script work?
My goal: to open a specific file in the Drive given a file name.
function myHelloWorld() {
  Logger.log("Hello world\n");
  var files = DriveApp.getFiles();
  if (files.hasNext()) {
    Logger.log("There are files\n");
  }
}
Note:
1) Permission to run this script was granted in both accounts (as part of the first run).
2) No log messages were generated (View->Logs).
3) A timeout error was found in View->Execution transcript.
Code works fine
@KarthikMS - there's nothing wrong with the code. It works perfectly fine from my app and debugs nicely.
It's logging as expected, and you can see from the logs that it's running nicely too.
The code: do you want it to iterate?
I suspect you want it to iterate through all the files. If so, you need to use a while() statement rather than an if statement. Then it will iterate through all the files. Something like this:
while (files.hasNext()) {
  var file = files.next();
  Logger.log(file.getName());
}
Have you given this script permission to access your Google Drive?
You will need to authorize the script to access your Google Drive. See:
https://developers.google.com/apps-script/guides/services/authorization
I hope this helps.
It works okay:
Try it this way:
function myFiles() {
  var files = DriveApp.getFiles();
  while (files.hasNext()) {
    var fi = files.next();
    Logger.log('FileName: %s', fi.getName());
  }
}

Edit on Google Docs without converting

I'm integrating my system with Google Drive. Everything is working so far except for one thing: I cannot edit the uploaded Word documents without converting them to Google Docs first.
I've read here it's possible using a Chrome plugin:
https://support.google.com/docs/answer/6055139?hl=en
But that's not my goal. I'm storing the file's information in my database, and then I just request the proper URL for editing and previewing. Previewing works fine, but when I try the edit URL it says the file does not exist. If I convert the file (using Google Drive's interface) and pass the new ID, it works. I don't want to convert the users' documents to Google Docs, because they still use Word as their main editing software.
Is there a way to accomplish this?
This is how I'm doing it right now:
public static File UploadFile(FileInfo fileInfo, Stream stream, string googleAccount)
{
    var mimetype = GetValidMimetype(fileInfo.MimeType);
    var parentFolder = GetParentFolder(fileInfo);
    var file = new File { Title = fileInfo.Title, MimeType = mimetype, Parents = parentFolder };

    var uploadRequest = _service.Files.Insert(file, stream, mimetype);
    uploadRequest.Upload();
    file = uploadRequest.ResponseBody;

    ShareFileWith(file.Id, googleAccount);

    return file;
}
This is the URL for editing (where {0} is the file ID):
https://docs.google.com/document/d/{0}/edit?usp=drivesdk
I know that in order to convert the file I just need to:
uploadRequest.Convert = true;
But again, that's not what I want. Is it possible?
Thanks!
EDIT
Just an update: Convert = true should have worked, but it doesn't. I've raised an issue for that here: https://github.com/google/google-api-dotnet-client/issues/712
Bottom line: it only works if I open the file in Google Docs and then use its Id...