I am working on a scheduling/planning program in OCaml and I want to be able to use an iCal file as an input, but I can't figure out how to parse the file into my own calendar type in OCaml. Ideally, I want to be able to read an iCal file in the same way that you can read a json file using Yojson. Any ideas for how I could accomplish this?
If you're talking about the iCalendar format, then there is the OCaml library icalendar that can read it. You can install it with
opam install icalendar
It is pretty much undocumented, so here is an example program that reads a calendar and prints it back.
open Format

(* Read the whole file into a string. Buffer.add_channel keeps whatever it
   managed to read before raising End_of_file, so the final short chunk is
   not lost. *)
let read filename =
  let buf = Buffer.create 4096 in
  let src = open_in filename in
  let rec loop () = loop (Buffer.add_channel buf src 4096) in
  try loop () with End_of_file -> close_in src; Buffer.contents buf

(* Parse the file and pretty-print the resulting calendar. *)
let main filename =
  match Icalendar.parse (read filename) with
  | Error failure ->
    eprintf "Failed to read file %s: %s\n%!" filename failure
  | Ok calendar ->
    printf "%a\n%!" Icalendar.pp calendar

let () = main Sys.argv.(1)
Note that I also had to write the read function that reads the whole file into a string. This function is not part of the standard library but is commonly provided by other libraries, e.g., Base, Core, Batteries.
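If you are on OCaml 4.14 or newer, the standard library's In_channel module can do this directly; a minimal sketch:
(* OCaml 4.14+: read the whole file with the standard library alone. *)
let read filename =
  In_channel.with_open_bin filename In_channel.input_all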
To build the program, create an empty folder, put the code into a file, e.g., example.ml, and then issue the following command in that folder:
ocamlbuild -pkg icalendar example.native
You can then use the built binary as
./example.native input.ics
where input.ics is the sample input.
I am trying to read a CSV file in a Firebase function so that I can process the file and perform the rest of the operations using the data.
import * as csv from "csvtojson";

const csvFilePath = "<gdrive shared link>"

try {
  console.log("First Method...")
  csv()
    .fromFile(csvFilePath)
    .then((jsonObj: any) => {
      console.log("jsonObj....", JSON.stringify(jsonObj));
    })

  console.log("Second Method...")
  const jsonArray = await csv().fromFile(csvFilePath);
  console.log("jsonArray...", JSON.stringify(jsonArray))
} catch (e) {
  console.log("error", JSON.stringify(e))
}
The above are the two methods I have tried for reading the CSV, but both show the Firebase error
'Error: File does not exist. Check to make sure the file path to your csv is correct.'
For 'csvFilePath' I have tried two approaches.
Added the CSV file to the same folder as the function and referenced it like
const csvFilePath = "./student.csv"
Uploaded the same file to Google Drive, changed the access permissions so that anyone with the link can read and edit, and used that link as the path
const csvFilePath = "<gdrive shared link>"
Both show the same error. In the Google Drive case I don't want to use any sort of Google credentials, because I only intended to read a simple CSV file in the Firebase function.
I will start by proposing that you convert your CSV to JSON locally, outside the function, and see if it works. I suggest this because you are using ES6 imports, which might be causing an issue, since all the csvtojson documentation uses require. You can also try CSV Parse or some of the solutions provided in this question as an alternative, again trying them outside the function to check whether they actually work and rule that out. You could even upload the JSON once you have converted it from the CSV, but that depends on what you are trying to do.
I think the best way to achieve this is to follow the approach given in this question, which first uploads the file into Cloud Storage and uses onFinalize() to trigger the conversion.
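For reference, here is a rough, untested sketch of that Cloud Storage approach, assuming the firebase-functions v1 API, firebase-admin and csvtojson are available (the function name and file layout are placeholders):
const functions = require("firebase-functions");
const admin = require("firebase-admin");
const csv = require("csvtojson");

admin.initializeApp();

// Fires whenever a file lands in the default bucket; converts CSV uploads to JSON.
exports.convertCsv = functions.storage.object().onFinalize(async (object) => {
  if (!object.name || !object.name.endsWith(".csv")) return;
  const [contents] = await admin.storage().bucket(object.bucket).file(object.name).download();
  const jsonArray = await csv().fromString(contents.toString("utf8"));
  console.log("jsonArray...", JSON.stringify(jsonArray));
});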
I will also point you to these three questions that went through similar issues with the path. They were able to fix it by adding __dirname (see the sketch after this list). Each one has some extra useful information.
Context for "relative paths" seems to change to the calling module if a module is imported
The csvtojson converter ignores my file name and just puts undefined
How to avoid the error which throws a csvtojson
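Here is a minimal, untested sketch of the __dirname fix applied to the local-file variant, assuming student.csv is deployed in the same directory as the compiled function code (and using require, as the csvtojson docs do):
const path = require("path");
const csv = require("csvtojson");

// Resolve the CSV relative to the deployed function code, not the process's working directory.
const csvFilePath = path.join(__dirname, "student.csv");

async function readStudents() {
  const jsonArray = await csv().fromFile(csvFilePath);
  console.log("jsonArray...", JSON.stringify(jsonArray));
  return jsonArray;
}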
Can we open a gz file with the Tcl_FSOpenFileChannel API? https://linux.die.net/man/3/tcl_fsopenfilechannel
You can open the file, but you will just see the compressed data within it.
Decompression is done by stacking on a decompressor, likely gunzip for GZ-format data. The API for attaching one to a Tcl channel is currently only a Tcl script-level API; using it from C requires registering the channel (Tcl_RegisterChannel) so that Tcl scripts in that interpreter can see it, and then evaluating zlib push gunzip to attach the decompressor.
Tcl_Channel chan = Tcl_FSOpenFileChannel(interp, theFileNameObj, "rb", 0);
// Should check for error here (NULL == chan), of course
Tcl_RegisterChannel(interp, chan);
char buffer[128]; // plenty of space; channel names aren't *that* long
sprintf(buffer, "zlib push gunzip %s", Tcl_GetChannelName(chan));
Tcl_Eval(interp, buffer);
// Ought to check result of Tcl_Eval for TCL_ERROR
// Use the file here
Tcl_Close(NULL, chan);
You can use the Tcl zlib support library functions to do decompression (there's both a bulk and a streaming API described on that page), but attaching to a channel isn't one of the options. (I've added a ticket to remind someone to make this nicer.)
Over the years on Snapchat I have saved lots of photos that I would like to retrieve now. The problem is they do not make it easy to export, but luckily if you go online you can request all your data (that's great).
I can see the download link for every one of my photos, and if I click download in the local HTML file it starts downloading.
Here's the tricky part: I have around 15,000 downloads to do, and manually clicking each individual one will take ages. I've tried extracting all of the links from the download buttons, which gives lots of URLs (great), but if you paste a URL into the browser, "Error: HTTP method GET is not supported by this URL" appears.
I've tried a multitude of different Chrome extensions and none of them show the actual download, just the HTML on the left-hand side.
The download button is a clickable link (an <a> element with an href) that just starts the download in the tab.
I'm trying to figure out what the best way of bulk downloading each of these individual files is.
So, I just looked at their code by downloading my own memories. They use a custom JavaScript function to download your data (a POST request with IDs in the body).
You can replicate this request, but you can also just use their method.
Open your console and use downloadMemories(<url>)
Or, if you don't have the URLs, you can retrieve them yourself:
// grab every download link in the first table on the page
var links = document.getElementsByTagName("table")[0].getElementsByTagName("a");
// the href contains a call to downloadMemories(...), so evaluating it starts the download
eval(links[0].href);
UPDATE
I made a script for this:
https://github.com/ToTheMax/Snapchat-All-Memories-Downloader
Using the .json file, you can download them one by one with Python:
import requests

# POST to the memory's download link; the response body is a direct URL to the media file
req = requests.post(url, allow_redirects=True)
response = req.text
file = requests.get(response)
Then get the correct extension and the date:
# `date` and `type` come from the memory's entry in the JSON file
day = date.split(" ")[0]
time = date.split(" ")[1].replace(':', '-')
filename = f'memories/{day}_{time}.mp4' if type == 'VIDEO' else f'memories/{day}_{time}.jpg'
And then write it to file:
with open(filename, 'wb') as f:
f.write(file.content)
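Putting those fragments together, here is a rough sketch of a loop over the whole export. The "Saved Media" key and the field names are assumptions based on the example entry shown further down in this thread, so check your own export for the exact layout.
import json
import os
import requests

# Load the export; "Saved Media" and the per-entry fields are assumed - adjust to your file.
with open('memories_history.json') as f:
    memories = json.load(f)['Saved Media']

os.makedirs('memories', exist_ok=True)
for entry in memories:
    date = entry['Date']
    media_type = entry['Media Type']
    # POST to the download link; the response body is the real file URL
    file_url = requests.post(entry['Download Link'], allow_redirects=True).text
    file = requests.get(file_url)
    day = date.split(' ')[0]
    time = date.split(' ')[1].replace(':', '-')
    ext = 'mp4' if media_type.upper() == 'VIDEO' else 'jpg'
    with open(f'memories/{day}_{time}.{ext}', 'wb') as f:
        f.write(file.content)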
I've made a bot to download all memories.
You can download it here
It doesn't require any additional installation; just place the memories_history.json file in the same directory and run it. It skips the files that have already been downloaded.
Short answer
Download a desktop application that automates this process.
Visit downloadmysnapchatmemories.com to download the app. You can watch this tutorial guiding you through the entire process.
In short, the app reads the memories_history.json file provided by Snapchat and downloads each of the memories to your computer.
App source code
Long answer (How the app described above works)
We can iterate over each of the memories within the memories_history.json file found in your data download from Snapchat.
For each memory, we make a POST request to the URL stored as the memory's Download Link. The response will be a URL to the file itself.
Then, we can make a GET request to the returned URL to retrieve the file.
Example
Here is a simplified example of fetching and downloading a single memory using NodeJS:
Let's say we have the following memory stored in fakeMemory.json:
{
  "Date": "2022-01-26 12:00:00 UTC",
  "Media Type": "Image",
  "Download Link": "https://app.snapchat.com/..."
}
We can do the following:
// import required libraries
const fetch = require('node-fetch'); // node-fetch v2 works with require(); needed for making fetch requests
const fs = require('fs'); // Needed for writing to filesystem

// await is only allowed inside an async function, so wrap the steps in one
(async () => {
  const memory = JSON.parse(fs.readFileSync('fakeMemory.json'));
  const response = await fetch(memory['Download Link'], { method: 'POST' });
  const url = await response.text(); // the response body is the URL to the file
  // We can now use the `url` to download the file.
  const download = await fetch(url, { method: 'GET' });
  const fileName = 'memory.jpg'; // file name we want this saved as
  const fileData = download.body; // contents of the file, as a readable stream
  // Write the contents of the file to this computer using Node's file system
  const fileStream = fs.createWriteStream(fileName);
  fileData.pipe(fileStream);
  fileStream.on('finish', () => {
    console.log('memory successfully downloaded as memory.jpg');
  });
})();
So I am attempting to learn how to use the Google Sheets API with Node.js. To get an understanding, I followed along with the Node.js quickstart guide supplied by Google. I attempted to run it, nearly line for line a copy of the guide, just without the documentation comments, and I wind up with cmd console output that definitely didn't work.
Just in case anyone wants to see if I am not matching the guide, which is entirely possible since I am fairly new to this, here is a link to the Google page and my code.
https://developers.google.com/sheets/api/quickstart/nodejs
var fs = require('fs');
var readline = require('readline');
var google = require('googleapis');
var googleAuth = require('google-auth-library');
var SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly'];
var TOKEN_DIR = (process.env.HOME || process.env.HOMEPATH ||
process.env.USERPROFILE) + '/.credentials/';
var TOKEN_PATH = TOKEN_DIR + 'sheets.googleapis.com-nodejs-quickstart.json';
fs.readFile('client_secret.json', function processClientSecrets(err, content) {
  if (err) {
    console.log('Error loading client secret file: ' + err);
  }
  authorize(JSON.parse(content), listMajors);
});
I have tried placing the JSON file in each and every part of the directory, but it still won't see it. I've been pulling hairs all day, and a poke in the right direction would be immensely appreciated.
From your command output:
Error loading client secret file
So your if (err) line is being triggered. But since you don't throw the error, the script continues anyway (which is dangerous in general).
SyntaxError: Unexpected token u in JSON at position 0
This means that the data you are passing to JSON.parse() is undefined. It is not a valid JSON string.
You could use load-json-file (or the thing it uses, parse-json) to get more helpful error messages. But here it's simply caused by the fact that your content variable holds nothing, since the client_secret.json you tried to read could not be found.
As for why the file could not be found, there could be a typo in either the script or the filename you saved the JSON in. Or it may have to do with the current working directory. You may want to use something like this to ensure you end up with the same path regardless of the current working directory.
path.join(__dirname, 'client_secret.json')
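Applied to the quickstart snippet, a minimal sketch with both fixes (bailing out on the error and resolving the path relative to the script) could look like this:
var path = require('path');

fs.readFile(path.join(__dirname, 'client_secret.json'), function processClientSecrets(err, content) {
  if (err) {
    console.log('Error loading client secret file: ' + err);
    return; // stop here, otherwise JSON.parse(undefined) throws further down
  }
  authorize(JSON.parse(content), listMajors);
});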
Resources
path.join()
__dirname
For my program I have to include huge index and data files in the program bundle. Because it is a universal app, I have included these files in a folder named "Data" within the "Shared" project.
Now I try to read:
StorageFile file = await ApplicationData.Current.LocalFolder.GetFileAsync("Data/"+fileName);
Stream stream = (await file.OpenReadAsync()).AsStreamForRead();
BinaryReader reader = new BinaryReader(stream);
Windows.Storage.FileProperties.BasicProperties x = await file.GetBasicPropertiesAsync();
I get a System.ArgumentException (in mscorlib.ni.dll) at the first line. What's wrong?
Once I can get the file, I want to find the file size. I hope I can find this information within the FileProperties (last line of code).
Then I want to set a file pointer within that file and read a defined number of bytes. Can I do that without reading the whole file into memory?
What you are trying to do is to access LocalFolder, which is not the same as Package.Current.InstalledLocation.
If you want to access files that are included with your package, you can do it, for example, like this - by using URI schemes:
StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri(@"ms-appx:///Data/" + fileName));
using (Stream stream = (await file.OpenReadAsync()).AsStreamForRead())
using (BinaryReader reader = new BinaryReader(stream))
{
    Windows.Storage.FileProperties.BasicProperties x = await file.GetBasicPropertiesAsync();
}
or like this - by getting the file from your Package, which you can access as a StorageFolder - also pay attention here to use the correct slashes (as they may be a source of exceptions):
StorageFile file = await Windows.ApplicationModel.Package.Current.InstalledLocation.GetFileAsync(@"Data\" + fileName);
using (Stream stream = (await file.OpenReadAsync()).AsStreamForRead())
using (BinaryReader reader = new BinaryReader(stream))
{
    Windows.Storage.FileProperties.BasicProperties x = await file.GetBasicPropertiesAsync();
}
Note also that I've put your Stream and BinaryReader into using blocks, as they are IDisposable and it's good to release those resources once they are no longer needed.
Note also that when your shared project is named MySharedProject, you will have to modify the path of the above URI:
StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri(@"ms-appx:///MySharedProject/Data/" + fileName));
or obtain the suitable StorageFolder:
StorageFile file = await Windows.ApplicationModel.Package.Current.InstalledLocation.GetFileAsync(@"MySharedProject\Data\" + fileName);
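As for the follow-up questions about the file size and reading only part of the file: BasicProperties.Size gives the size in bytes, and you can position the stream before reading a fixed number of bytes, so the whole file never has to be loaded into memory. A short sketch (the offset and length below are just example values):
Windows.Storage.FileProperties.BasicProperties x = await file.GetBasicPropertiesAsync();
ulong fileSize = x.Size;                  // total size of the file in bytes

using (Stream stream = (await file.OpenReadAsync()).AsStreamForRead())
using (BinaryReader reader = new BinaryReader(stream))
{
    stream.Seek(1024, SeekOrigin.Begin);  // example: move the file pointer to offset 1024
    byte[] block = reader.ReadBytes(512); // example: read 512 bytes from that position
}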
One remark after discussion:
When you add a file with the .txt extension to your project, its Build Action is set to Content by default. But when you add a file with the .idx extension, as I've checked, its Build Action is set to None by default. To include those files in your package, change it to Content.
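If you prefer to inspect the project file directly, the resulting entries in the shared project's .projitems file should look roughly like this once the files are marked as Content (the file names here are placeholders; normally you just change Build Action in each file's Properties window):
<ItemGroup>
  <Content Include="$(MSBuildThisFileDirectory)Data\myindex.idx" />
  <Content Include="$(MSBuildThisFileDirectory)Data\mydata.dat" />
</ItemGroup>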
After Romasz put me on the right path, I can see the problem is actually quite different.
My data files were placed in the correct location in the project, but Visual Studio does not include everything you might expect.
In my project I need large data files to be firmly integrated into the program. They are between 13 KB and 41 MB in size and have the file extensions .idx and .dat. These extensions are part of the problem.
What I know so far:
I can add .txt files of seemingly arbitrary size. Tested with 41 MB - no problem.
The same file with its extension changed to .idx is not added. The file is simply not included in the compiled project. No error message.
Of course I can rename the .idx files to another extension (tested with .id), but I want to know why .idx files are treated differently, and why I got no error indication.