Snapchat download all memories at once - google-chrome

Over the years on Snapchat I have saved lots of photos that I would now like to retrieve. The problem is they do not make it easy to export, but luckily you can go online and request all your data (that's great).
I can see a download link for each of my photos, and if I click download in the local HTML file they provide, the download starts.
Here's the tricky part: I have around 15,000 downloads to do, and manually clicking each individual one would take ages. I've tried extracting all of the links from the download buttons, which produces lots of URLs (great), but if you paste one of those URLs into the browser you get "Error: HTTP method GET is not supported by this URL".
I've tried a multitude of different Chrome extensions, and none of them shows the actual download, just the HTML on the left-hand side.
The download button is a clickable link (an <a> tag with an href) that just starts the download in the tab.
I'm trying to figure out the best way to bulk-download each of these individual files.

So, I just looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, but you can also just use their method.
Open your console and use downloadMemories(<url>)
Or if you don't have the URLs you can retrieve them yourself:
// grab every download link from the memories table
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
// each href is a javascript:downloadMemories(...) URL, so eval-ing it starts the download
eval(links[0].href);
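If a single eval works, a console loop over all the links should too. Here is an untested sketch: it assumes every row's href is one of those javascript:downloadMemories(...) URLs, and staggers the calls so the tab isn't flooded with simultaneous downloads:
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
Array.from(links).forEach(function (link, i) {
  // run each javascript: href half a second apart
  setTimeout(function () { eval(link.href); }, i * 500);
});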
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader

Using the .json file you can download them one by one with Python (url, date and type below come from each entry in that file):
import requests

# POST to the entry's download link; the response body is a URL to the actual file
req = requests.post(url, allow_redirects=True)
response = req.text
file = requests.get(response)
Then get the correct extension and the date:
day = date.split(" ")[0]                     # "Date" looks like "2022-01-26 12:00:00 UTC"
time = date.split(" ")[1].replace(':', '-')  # colons are not valid in file names
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
    f.write(file.content)
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation: just place the memories_history.json file in the same directory and run it. It skips files that have already been downloaded.

Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial guiding you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response will be a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
  "Date": "2022-01-26 12:00:00 UTC",
  "Media Type": "Image",
  "Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// import required libraries (node-fetch v2, which still supports require())
const fetch = require('node-fetch'); // Needed for making fetch requests
const fs = require('fs'); // Needed for writing to filesystem

(async () => { // top-level await isn't allowed in CommonJS, so wrap the logic in an async IIFE
  const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));
  const response = await fetch(memory['Download Link'], { method: 'POST' });
  const url = await response.text(); // the response body is the URL to the file
  // We can now use the `url` to download the file.
  const download = await fetch(url, { method: 'GET' });
  const fileName = 'memory.jpg'; // file name we want this saved as
  const fileData = download.body; // contents of the file as a readable stream
  // Write the contents of the file to this computer using Node's file system
  const fileStream = fs.createWriteStream(fileName);
  fileData.pipe(fileStream);
  fileStream.on('finish', () => {
    console.log('memory successfully downloaded as memory.jpg');
  });
})();
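To handle videos as well, the hard-coded 'memory.jpg' line could be replaced by deriving the extension from the memory's Media Type field, mirroring the Python answer above (a small sketch, consistent with fakeMemory.json):
// pick the extension based on the memory's media type
const extension = memory['Media Type'] === 'Image' ? 'jpg' : 'mp4';
const fileName = `memory.${extension}`;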

Related

How do I download information stored in my Chrome extension?

I am developing a Chrome extension where the workflow looks like:
user browses the internet and can save links. I have a strong preference to store all links locally instead of, say, having to talk to an external server.
user can then hit a button in the extension which generates and downloads a csv file of all the saved links so far
Two questions:
What is the appropriate way to store this data over multiple sessions?
What is the appropriate way of generating the file and prompting a download?
For 1, I plan on using chrome.storage.local.
For 2, it's unclear what the best way is. I'm considering writing the data to options.html or popup.html, then calling chrome.downloads to download that page, but it feels like a massive hack.
What is the correct way of doing 1 and 2?
Using chrome.storage.local is the right way here.
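For completeness, a minimal sketch of what that could look like (the savedLinks key and both helpers are hypothetical, not from any particular extension):
// persist an array of links under a hypothetical "savedLinks" key
function saveLink(url) {
  chrome.storage.local.get({ savedLinks: [] }, function (data) {
    data.savedLinks.push(url);
    chrome.storage.local.set({ savedLinks: data.savedLinks });
  });
}
// read them all back, e.g. to build the CSV export later
function loadLinks(callback) {
  chrome.storage.local.get({ savedLinks: [] }, function (data) {
    callback(data.savedLinks);
  });
}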
I am using this snippet right from the popup in order to save text/json/csv files:
/**
 * @param data {String} what to save
 * @param extension {String} file extension
 */
function saveFile(data, extension = 'json') {
  const fileName = `export-file.${extension}`;
  const textFileAsBlob = new Blob([data], {type: 'text/plain'});
  const downloadLink = document.createElement('a');
  downloadLink.download = fileName;
  downloadLink.href = window.URL.createObjectURL(textFileAsBlob);
  downloadLink.target = '_blank';
  downloadLink.click();
  return fileName;
}
It will save a file to disk, and it is not a hacky way to do it.
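Tying the two questions together, the export button's click handler could look like this (a sketch that reuses the hypothetical loadLinks helper from above):
document.getElementById('export').addEventListener('click', function () {
  loadLinks(function (links) {
    // one quoted link per row is valid single-column CSV
    const csv = links.map(function (l) { return `"${l}"`; }).join('\n');
    saveFile(csv, 'csv');
  });
});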
Update for #2
Another way is to pass a base64-encoded data: URL to the downloads API:
chrome.downloads.download({url: 'data:image/gif;base64,SEVMTE8gV09STEQh', filename: 'test.txt'})
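For arbitrary text you can build that data: URL yourself; the base64 string in the example above is just "HELLO WORLD!" encoded. A sketch (note that btoa only accepts Latin-1 strings, so Unicode content needs extra encoding):
const text = 'HELLO WORLD!';
chrome.downloads.download({
  url: 'data:text/plain;base64,' + btoa(text),
  filename: 'test.txt'
});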

How to get "Coverage" data out from the Chrome Dev Tools

I am using the Coverage tab in my Chrome Dev Tools. I have a really big file, and after playing a lot with Coverage it's clear that only about 15% of my CSS code is being used (I simulated button presses, hover menus...).
The problem is getting that 15% of code OUT of the Coverage tab. I can't believe the devs behind this really nice feature didn't provide an easy way for the end user to copy only the green part of the code. Check the attached image.
Do you have any idea how I could do that? I read something about using Puppeteer, but it requires lots of preparation. On the latest Canary version it looks like I can export a JSON, but it would require some time to code a parser for that JSON in order to extract only the needed part.
Thanks to an article by Phillip Kriegel (https://www.philkrie.me/2018/07/04/extracting-coverage.html) I managed to set up Puppeteer to extract the coverage CSS from a URL and output that CSS into a file.
Here's how to do it:
Step 1: Install Node.js globally
Step 2: Create a folder on your desktop
Step 3: Inside the folder, initialize an npm package (npm init) and install the Puppeteer module (npm install puppeteer)
Step 4: Create a JavaScript file inside the folder and name it coverage.js
Step 5: Put this code inside that js file:
const puppeteer = require('puppeteer');
// Include to be able to export files w/ node
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Begin collecting CSS coverage data
  await page.coverage.startCSSCoverage();
  // Visit desired page
  await page.goto('https://www.google.com');
  // Stop collection and retrieve the coverage entries
  const cssCoverage = await page.coverage.stopCSSCoverage();
  // Investigate CSS coverage and extract used CSS
  let css_used_bytes = 0;
  let css_total_bytes = 0;
  let covered_css = "";
  for (const entry of cssCoverage) {
    css_total_bytes += entry.text.length;
    console.log(`Total Bytes for ${entry.url}: ${entry.text.length}`);
    for (const range of entry.ranges) {
      // ranges are end-exclusive, so the used length is end - start
      css_used_bytes += range.end - range.start;
      covered_css += entry.text.slice(range.start, range.end) + "\n";
    }
  }
  console.log(`Total Bytes of CSS: ${css_total_bytes}`);
  console.log(`Used Bytes of CSS: ${css_used_bytes}`);
  fs.writeFile("./exported_css.css", covered_css, function (err) {
    if (err) {
      return console.log(err);
    }
    console.log("The file was saved!");
  });
  await browser.close();
})();
Step 6: BE SURE TO REPLACE the URL in the line await page.goto('https://www.google.com'); with your desired URL
Step 7: In a command line tool (e.g. Git Bash), run node coverage.js
A file called exported_css.css will be created; it will contain all the coverage CSS for the URL you set in the code.
CAVEAT: This will extract the coverage CSS from ALL the CSS assets that are loaded from the URL you set. You will then have to further optimize that CSS (not covered in this example).
Open a Chrome tab --> Inspect Element (F12) --> press Escape to open the drawer, where the Coverage tab can be added from the drawer's More tools menu.
I'm in the process of creating a PHP script that parses the Coverage JSON export and outputs only the extracted used CSS/JS. Unfortunately I have come across a snag: at some point the parser loses the correct character offset, and I end up with broken or incorrect CSS/JS syntax. It's only off by a few characters, but the amount it is off by is variable, so it's almost impossible to correct for during parsing.
I'm not positive, but I think the offsets in the Coverage JSON count characters the way the browser does, while PHP's string functions count bytes, so multi-byte characters in a large CSS file gradually shift the positions. I'm going to attempt to write a Coverage JSON parser in JavaScript instead. When I do I'll be sure to post the code here for all to use.
Sorry I couldn't be of more help; I just wanted to warn people away from using PHP for this, as it seems to misread character offsets in large CSS files.
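In the meantime, here is a rough Node sketch of what that JavaScript parser could look like. It assumes the exported JSON is an array of { url, ranges: [{ start, end }], text } entries with offsets counted the way the browser counts them; since JavaScript's slice counts characters the same way, multi-byte characters shouldn't shift the positions:
const fs = require('fs');

// load the file exported from the DevTools Coverage tab
const entries = JSON.parse(fs.readFileSync('coverage.json', 'utf8'));

let used = '';
for (const entry of entries) {
  for (const range of entry.ranges) {
    used += entry.text.slice(range.start, range.end) + '\n';
  }
}
fs.writeFileSync('extracted.css', used);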

firebase Google Cloud storage download URL has folder name which becomes file name

We are using Firebase Google Cloud Storage Bucket to store our files.
When the logged-in user wants to download a file kept inside a certain folder,
e.g. 123/admin/1469611803143/123.xlsx,
the generated URL will be
https://firebasestorage.googleapis.com/v0/b/MYWEBSITE.appspot.com/o/123%2Fadmin%2F1469611803143%2F123.xlsx?alt=media&token=whatever_alpa_numeric_token
When I download this file, the file name is 123%2Fadmin%2F1469611803143%2F123.xlsx
and not 123.xlsx.
We have tried using the download attribute to change the file name,
but this did not change the file name to 123.xlsx.
Please HELP
I'm pretty new to Firebase, but I achieved this with the following code:
var storageRef = firebase.storage().ref();
var child = storageRef.child("your path");
var uploadTask = child.put(<file>);

uploadTask.on(firebase.storage.TaskEvent.STATE_CHANGED,
  function (snapshot) {
    // HANDLE TASK PROGRESS
  },
  function (error) {
    // HANDLE ERROR
  },
  function () {
    // UPLOAD SUCCESSFUL: set a Content-Disposition header so the
    // browser saves the download under a proper file name
    var newMetadata = {
      contentDisposition: "attachment; filename=" + fileName
    };
    child.updateMetadata(newMetadata);
  });
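After the metadata update, a freshly fetched download URL should serve the file with that Content-Disposition header, so the browser saves it as fileName. A sketch continuing the snippet above:
child.getDownloadURL().then(function (url) {
  // navigating to this URL now downloads the file under the attachment name
  window.location.href = url;
});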
This is (fortunately or unfortunately) intended behavior. Technically, files in Firebase Storage are stored under the full path, so 123%2Fadmin%2F1469611803143%2F123.xlsx is actually the file name: the slashes and percent escaping are part of the name and are only rendered as path separators in the UI. That is how we get this behavior.
We're likely to modify how downloads work in the future (in that we'll truncate the name), but we've been busy fixing other bugs and polishing higher priority pieces.

Read file at startup Chrome extension/kiosk app

I'm currently developing my first Chrome app, which will later be used as a kiosk app.
I'm trying to read a file at the startup of the app; that file is a config file (.json). It contains values that will be passed inside a URL once the app has launched (e.g. www.google.com/key=keyValueInTheJsonFile).
I used https://developer.chrome.com/apps/fileSystem (the chooseEntry method in particular) to be able to read a file, but in my case I would like to specify the path/name of the file directly rather than asking the user to select it. That way I can pass the values to the redirected URL at startup.
Any idea of how I could possibly do that?
Thanks!
If your file is in the app's package, you can read it using a simple XHR or Fetch call.
You can't use the web filesystem, since it serves a different purpose, and Chrome's filesystem API (the user's FS) won't work here either, since it requires user interaction.
Use chrome.runtime.getURL to get the full URL of the resource and then fetch it:
var rUrl = chrome.runtime.getURL('file.json');
fetch(rUrl)
  .then((response) => {
    return response.json();
  })
  .then((fileContent) => {
    // the content
  })
  .catch((cause) => console.log(cause));
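From there, a sketch of wiring the config value into the kiosk URL (assuming file.json looks like { "key": "keyValueInTheJsonFile" } and the app shows the page in a <webview> element; both are assumptions, not from the question):
fetch(chrome.runtime.getURL('file.json'))
  .then((response) => response.json())
  .then((config) => {
    // append the config value to the target URL, as described in the question
    var target = 'https://www.google.com/?key=' + encodeURIComponent(config.key);
    document.querySelector('webview').src = target;
  })
  .catch((cause) => console.log(cause));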

Edit on Google Docs without converting

I'm integrating my system with Google Drive. Everything is working so far except one thing: I cannot edit uploaded Word documents without converting them to Google Docs first.
I've read here it's possible using a Chrome plugin:
https://support.google.com/docs/answer/6055139?hl=en
But that's not my goal. I'm storing the file's information in my database and then just requesting the proper URL for editing and previewing. Previewing works fine, but when I try the edit URL it says the file does not exist. If I convert the file (using Google Drive's interface) and pass the new ID, it works. I don't want to convert the users' documents to Google Docs format, because they still use Word as their main editing software.
Is there a way to accomplish this?
This is how I'm doing it right now:
public static File UploadFile(FileInfo fileInfo, Stream stream, string googleAccount)
{
    var mimetype = GetValidMimetype(fileInfo.MimeType);
    var parentFolder = GetParentFolder(fileInfo);

    var file = new File { Title = fileInfo.Title, MimeType = mimetype, Parents = parentFolder };

    var uploadRequest = _service.Files.Insert(file, stream, mimetype);
    uploadRequest.Upload();

    file = uploadRequest.ResponseBody;
    ShareFileWith(file.Id, googleAccount);

    return file;
}
This is the URL for editing (where {0} is the file ID):
https://docs.google.com/document/d/{0}/edit?usp=drivesdk
I know that in order to convert the file I just need to:
uploadRequest.Convert = true;
But again, that's not what I want. Is it possible?
Thanks!
EDIT
Just an update: Convert = true should have worked, but it doesn't. I've raised an issue for that here: https://github.com/google/google-api-dotnet-client/issues/712
Bottom line: it only works if I open the file in Google Docs and then use its ID...