Export Contact (Google_csv format) from API with a script - google-apps-script

Here's my problem; I hope someone can help me.
Problem Description:
I wrote a PHP script to back up the Google/Gmail contacts of an account every day.
I get an Atom file, but I want a google_csv file.
Steps to Reproduce:
After authenticating (OAuth) against the API, I get the contact list with POST https://www.google.com/m8/feeds/contacts/default/full?access_token=XXXXXXXXX.
So I get an Atom file, but I want to extract a Google CSV (with groups, ...)
I tried:
- contacts/default/full?alt=csv&access_token=XXXXXXXXX (fails)
- contacts/default/full?alt=google_csv&access_token=XXXXXXXXX (fails)
- contacts/default/full?out=google_csv&access_token=XXXXXXXXX (fails)
- https://www.google.com/s2/u/0/data/exportquery?ac=false&cr=true&ct=true&ev=true&f=g2&gp=true&hl=fr&id=personal&max=-1&nge=true&out=google_csv&sf=display&sgids=6%2Cd%2Ce%2Cf%2C17&st=0&type=4&tok=XXXXXXXXX (fails)
...
Is there any suggestion for converting the Atom file to a google_csv file, or for directly getting a google_csv via an export query from Google?
Thanks

Based on this SO ticket, you need a CSV file that is a table, with well-defined columns and the same number of columns in each row.
The script below shows how to output the Google contacts as a .csv file. You get the data as a string and print it out in spreadsheet format.
<?php
// Expect the contact data as a string: one contact per line, fields comma-separated
$data = $_POST['email'];

$rows = array();
foreach (explode("\n", $data) as $line) {
    $line = trim($line);
    if ($line !== '') {
        $rows[] = explode(',', $line);   // split each line into its fields
    }
}

header("Content-type: text/csv");
header("Content-Disposition: attachment; filename=file.csv");
header("Pragma: no-cache");
header("Expires: 0");

$file = fopen('php://output', 'w');
fputcsv($file, array('Description'));    // header row
foreach ($rows as $row) {
    fputcsv($file, $row);
}
fclose($file);
exit();
Check this tutorial.
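If no export endpoint works out, another route is to convert the Atom feed yourself. Here is a minimal Python sketch of that conversion; the two columns are only an illustration, since the real google_csv format has many more (phones, addresses, group membership, ...):

```python
import csv
import io
import xml.etree.ElementTree as ET

# Namespaces used by the Contacts feed (gd is Google's data namespace)
NS = {
    "atom": "http://www.w3.org/2005/Atom",
    "gd": "http://schemas.google.com/g/2005",
}

def atom_to_csv(atom_xml):
    """Extract a Name / E-mail CSV from a Contacts Atom feed."""
    root = ET.fromstring(atom_xml)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["Name", "E-mail Address"])
    for entry in root.findall("atom:entry", NS):
        title = entry.find("atom:title", NS)
        email = entry.find("gd:email", NS)
        writer.writerow([
            title.text if title is not None else "",
            email.get("address") if email is not None else "",
        ])
    return out.getvalue()
```

The full google_csv column set would have to be filled in the same way, one `writerow` field per column.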

Related

phpmyadmin import csv file. How to skip error line

I'm trying to import about 3 GB of CSV files into phpMyAdmin. Some of them contain extra terminating characters, and then the import stops because of malformed fields.
I have two columns that I want to fill. I'm using : as the terminating character, but when a line contains more than one of them, the import just stops. I can't edit the CSV files by hand; they are too big. I want to skip the bad lines, or find another solution. How can I do this?
The CSV files look like this:
ahoj123:dublin
cat:::dog
pes::lolko
As a solution to your problem, I have written a simple PHP script that will "fix" your file for you.
It will open "test.csv" with the contents:
ahoj123:dublin
cat:::dog
pes::lolko
And convert it to the following and save to "fixed_test.csv"
ahoj123:dublin
cat:dog
pes:lolko
Bear in mind that I am basing this on your example, so I am letting $last keep its EOL character, since there is no reason to remove or edit it.
PHP file:
<?php
$filename = "test.csv";
$handle = fopen($filename, "r") or die("Could not open $filename" . PHP_EOL);
$keep = '';
while (($line = fgets($handle)) !== false) {
    $elements = explode(':', $line);
    $first = $elements[0];                    // text before the first ':'
    $last = $elements[count($elements) - 1];  // text after the last ':'
    $keep .= "$first:$last";
}
fclose($handle);
$new_filename = "fixed_test.csv";
$new_handle = fopen($new_filename, "w") or die("Could not open $new_filename" . PHP_EOL);
fwrite($new_handle, $keep);
fclose($new_handle);
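The same first-and-last-field idea can be sketched in Python, for comparison (same assumption as above: only the text before the first separator and after the last one is worth keeping):

```python
def fix_csv_lines(lines, sep=":"):
    """Collapse repeated separators: keep only the first and last field,
    mirroring the PHP script above."""
    fixed = []
    for line in lines:
        parts = line.split(sep)
        if len(parts) < 2:
            fixed.append(line)  # no separator: leave the line untouched
        else:
            fixed.append(parts[0] + sep + parts[-1])
    return fixed
```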

How to import data mocked into an Excel file into a MySql database table?

I am not very familiar with databases, and I have the following situation: I am working with MySQL and I have to import some data mocked up in a Microsoft Excel file.
In this file, the cells of the first row represent the table fields (each cell is a field), and the cells of the rows below contain the values for those fields.
At first I thought of developing a Java program that accesses this Excel file, parses it, and populates my DB. But this solution is unsustainable, because I have a lot of Excel files (each file contains the mocked data for a specific table).
Can I directly use an Excel file (or its .csv conversion) to populate a MySQL table? Is there an easy way to do it?
If so, what precautions should be taken?
MySQL query (if you have the FILE privilege); note that this exports a table to CSV, and the reverse statement, LOAD DATA INFILE, imports a CSV into a table:
SELECT * INTO OUTFILE "/mydata.csv"
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY "\n"
FROM test;
Or an easy PHP script
$result = $db_con->query('SELECT * FROM `some_table`');
if (!$result) die('Couldn\'t fetch records');
// Column names for the CSV header row (mysqli, not the old mysql_* API)
$headers = array();
foreach ($result->fetch_fields() as $field) {
    $headers[] = $field->name;
}
$fp = fopen('php://output', 'w');
if ($fp && $result) {
    header('Content-Type: text/csv');
    header('Content-Disposition: attachment; filename="export.csv"');
    header('Pragma: no-cache');
    header('Expires: 0');
    fputcsv($fp, $headers);
    while ($row = $result->fetch_array(MYSQLI_NUM)) {
        fputcsv($fp, $row);
    }
    die;
}

HTML::TableExtract - Script working with a html file but not with the corresponding URL

I am using the following script, which takes as input an HTML page obtained from this URL:
http://omim.org/entry/600185
use HTML::TableExtract;
my $doc = 'OMIM_2.htm';
my $headers = [ 'Phenotype', 'Inheritance' ];
my $table_extract = HTML::TableExtract->new(headers => $headers);
$table_extract->parse_file($doc);
my ($table) = $table_extract->tables;
for my $row ($table->rows) {
foreach my $info (@$row) {
if ($info =~ m/(\S+)/) {
$info =~ s/^\s+(.+)\s+$/$1/;
print $info."\t";
}
}
print "\n";
}
It does what I want: it extracts the "Phenotype" and "Inheritance" fields from the table.
Nevertheless, I would like to obtain this information directly from the URL, so I tried to modify the script:
use HTML::TableExtract;
my $doc = 'http://omim.org/entry/600185';
my $headers = [ 'Phenotype', 'Inheritance' ];
my $table_extract = HTML::TableExtract->new(headers => $headers);
$table_extract->parse($doc);
my ($table) = $table_extract->tables;
for my $row ($table->rows) {
foreach my $info (@$row) {
if ($info =~ m/(\S+)/) {
$info =~ s/^\s+(.+)\s+$/$1/;
print $info."\t";
}
}
print "\n";
}
I am certainly making a mistake, because I get the following error:
Can't call method "rows" on an undefined value at Test_OMIM.perl line 11.
More intriguingly, I also get this error when the file is called "OMIM_2.html" rather than "OMIM_2.htm". Is that logical?
Thanks in advance for your help.
You are giving HTML::TableExtract a URL when it wants to be given HTML. To download the HTML you would do this:
use strict;
use warnings qw/ all FATAL /;
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
my $response = $ua->get('http://omim.org/entry/600185');
my $html = $response->content;
print $html;
output
Your client was identified as a crawler.
Please note:
- The robots.txt files disallows the crawling of the site except to Google, Bing
and Yahoo crawlers.
- The raw data is available via FTP on the http://omim.org/downloads link on the site.
- We have an API you can learn about at http://omim.org/api and http://omim.org/help/api,
this provides access to the data in XML, JSON, Python and Ruby formats.
- You should feel free to contact us at http://omim.org/contact to figure the best
approach to getting the data you need.
Please note that you might have difficulties doing this, as omim.org does not want you to download the HTML automatically; it wants you to use the raw data or the API instead. The message above reflects their robots.txt, which all automated software is supposed to read and comply with voluntarily.
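For comparison, the split between fetching and parsing looks the same in any language. Here is a stdlib-only Python sketch of a crude table extractor that, like HTML::TableExtract, consumes HTML text rather than a URL; downloading the text (e.g. with urllib.request) stays a separate, prior step and would still hit OMIM's crawler block:

```python
from html.parser import HTMLParser

class TableRows(HTMLParser):
    """Collect the text of every <td>/<th> cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False
    def handle_data(self, data):
        if self._in_cell and self._row is not None:
            self._row.append(data.strip())

def extract_rows(html_text):
    # The parser is fed HTML text, never a URL
    p = TableRows()
    p.feed(html_text)
    return p.rows
```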

populate Html table with remote files

I want to create an HTML page with a table that populates itself with info from 2 .txt files that are on a remote Linux server.
Alternatively, populate an HTML page on that remote server with the same info from those 2 .txt files, and then access that page through Apache's web server.
Something as basic as possible would be nice, but I can understand if it's complicated to do with HTML.
Honestly, any help at all would be appreciated.
I would personally do it in PHP. You can read the file and echo it into a table, and you can then use the lines of the file for anything you want. All you have to do is change $filepath to point at your text file:
Edit: updated the code to add the constraints mentioned by the OP in the comments. There is probably a more optimized way of performing your task, but this works, and it should introduce some new concepts if you are new to PHP.
<?php
$filepath = 'files/the_file.txt';
if (file_exists($filepath)) {
$file = fopen($filepath, 'r');
echo '<table border=1>';
while (!feof($file)) {
$line = fgets($file);
$first_char = $line[0];
if ($first_char != '*' && $first_char != '^' && trim($line) != '') {
if (strstr($line, '|')) {
$split = explode('|', $line);
echo '<tr>';
foreach($split as $line) {
echo '<td>'.$line.'</td>';
}
echo '</tr>';
} else {
echo '<tr><td>'.$line.'</td></tr>';
}
}
}
fclose($file);
echo '</table>';
} else {
echo 'the file does not exist';
}
?>
I'll do my best to explain it line by line instead of flooding the script with comments:
- set your file path
- if the file exists, continue on; if not, print the error located at the bottom of the script
- open the file
- create the table ('<table>')
- while the text file is being read, do a series of things: first, get the line; if the first character of the line is a * or ^, or the trimmed line is empty, skip it completely; otherwise, continue on
- if the line contains a | character, split (explode) the line at the | characters, and for each piece of the resulting array, echo a new cell in the current row; otherwise, there is no | and you can just echo the line into a row normally
- once you are finished, end the table ('</table>')
Edit #2: The original solution I posted:
<?php
$filepath = '/var/www/files/the_file.txt';
if (file_exists($filepath)) {
$file = fopen($filepath, 'r');
echo '<table border=1>';
while (!feof($file)) {
$line = fgets($file);
echo '<tr><td>'.$line.'</td></tr>';
}
fclose($file);
echo '</table>';
} else {
echo 'the file does not exist';
}
?>
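For comparison, the same skip-and-split logic can be sketched in Python, under the same assumptions about the file format (lines starting with * or ^ are ignored, | separates columns):

```python
import html

def lines_to_table(lines):
    """Build an HTML table from text lines, mirroring the PHP above:
    skip blank lines and lines starting with '*' or '^'; split on '|'."""
    rows = []
    for line in lines:
        line = line.strip()
        if not line or line[0] in "*^":
            continue  # skipped line
        cells = line.split("|")
        tds = "".join(f"<td>{html.escape(c)}</td>" for c in cells)
        rows.append(f"<tr>{tds}</tr>")
    return '<table border="1">' + "".join(rows) + "</table>"
```

Escaping the cell contents (html.escape) is a small extra safety step the PHP version omits.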
HTML can't do anything by itself; HTML is a presentation format.
PHP, JavaScript, or Bash could do the job, in very different ways:
- PHP: the server reads the 2 remote files, assembles the HTML output into a web page, then sends it to the client
- JavaScript: the page itself fetches the 2 files and inserts them into itself
- Bash + curl: a Bash (or PHP, Python, ...) script creates a .html file containing the data of the 2 files
One of these might help you, if you can precreate the HTML rather than doing it dynamically. These scripts take CSV as input and output an HTML table:
http://stromberg.dnsalias.org/svn/to-table/
http://stromberg.dnsalias.org/svn/to-table2/

Number of times a file (e.g., PDF) was accessed on a server

I have a small website with several PDFs free for download. I use StatCounter to observe the number of page loads. It also shows me the number of PDF downloads, but it only counts downloads where a user clicks the link on my website. What about "external" access to the PDFs (e.g., directly from a Google search)? How can I count those? Is there any way to do it with a tool such as StatCounter?
Thanks.
.htaccess (redirect *.pdf requests to download.php):
RewriteEngine On
RewriteRule \.pdf$ /download.php
download.php:
<?php
$url = $_SERVER['REQUEST_URI'];
if (!preg_match('/([a-z0-9_-]+)\.pdf$/', $url, $r) || !file_exists($r[1] . '.pdf')) {
header('HTTP/1.0 404 Not Found');
echo "File not found.";
exit(0);
}
$filename = $r[1] . '.pdf';
// [do your statistics here]
header('Content-type: application/pdf');
header("Content-Disposition: attachment; filename=\"$filename\"");
readfile($filename);
?>
You can use a log analyzer to check how many times a file was accessed. Ask your hosting provider whether they provide access to the access logs and log analyzer software.
in PHP, it would be something like (untested):
$db = mysql_connect(...);
$file = $_GET['file'];
$allowed_files = array(...); // or check in the database
if (in_array($file, $allowed_files) && file_exists($file)) {
    header('Content-Description: File Transfer');
    header('Content-Type: application/pdf');
    header('Content-Disposition: attachment; filename=' . basename($file));
    header('Content-Transfer-Encoding: binary');
    header('Expires: 0');
    header('Cache-Control: must-revalidate');
    header('Pragma: public');
    header('Content-Length: ' . filesize($file));
    ob_clean();
    flush();
    // escape the value to avoid SQL injection
    mysql_query('UPDATE files SET count = count + 1 WHERE file = "' . mysql_real_escape_string($file) . '"');
    readfile($file);
    exit;
} else {
    /* issue a 404, or redirect to a not-found page */
}
You would have to create a way to capture the request on the server.
If you are using PHP, the best approach is probably mod_rewrite.
If you are using .NET, an HttpHandler.
You must handle the request, call StatCounter, and then send the PDF content to the user.
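The count-then-serve idea from the PHP snippet above can also be sketched compactly in Python, with sqlite3 standing in for the MySQL `files` table (the schema here is assumed):

```python
import sqlite3

def open_stats(path=":memory:"):
    """Open (or create) the per-file download counter table."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS files (file TEXT PRIMARY KEY, count INTEGER)"
    )
    return conn

def record_download(conn, filename):
    """Increment the counter for `filename`, creating the row on first access."""
    conn.execute(
        "INSERT INTO files (file, count) VALUES (?, 1) "
        "ON CONFLICT(file) DO UPDATE SET count = count + 1",
        (filename,),
    )
    conn.commit()

def download_count(conn, filename):
    row = conn.execute(
        "SELECT count FROM files WHERE file = ?", (filename,)
    ).fetchone()
    return row[0] if row else 0
```

In the real setup, record_download would run inside the rewritten download handler, right before the file bytes are sent.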