How to get the BoxFileVersion object of a previous version by version number and file ID using the Box SDK?

I can get the current version of the file using the code below:
BoxFile file = new BoxFile(api,fileId);
BoxFile.Info info = file.getInfo("version_number","file_version");
info.getVersionNumber(); // current version No.
Now I want to fetch the BoxFileVersion object for a given version number. In the code below I tried to get the previous version of the file, but I am unable to get the version number for specific versions:
Collection<BoxFileVersion> versions = file.getVersions(); // fetch the previous versions of the file
if (versions.size() != 0) { // there is at least one previous version
    for (BoxFileVersion bfv : versions) {
        if (bfv.getTrashedAt() == null) {
            bfv.promote();
            bfv.delete();
            System.out.println("Deleted Version ID : " + bfv.getVersionID());
            break;
        }
    }
} else {
    file.delete(); // delete the file if no previous version exists
}

So I tested out your code without deleting previous versions and it seems to work.
BoxFile file = new BoxFile(userApi, "xxxxxx");
System.out.println("file current version: " + file.getInfo().getVersion().getVersionID());
Collection<BoxFileVersion> versions = file.getVersions(); // fetch the previous versions of the file
int versionIndex = versions.size();
if (versions.size() != 0) { // there is at least one previous version
    for (BoxFileVersion bfv : versions) {
        if (versionIndex == versions.size()) { // the first entry is the most recent previous version
            bfv.promote();
            bfv.delete();
            System.out.println("Deleted Version ID : " + bfv.getVersionID());
        }
        System.out.println("bfv: [" + versionIndex-- + "] " + bfv.getVersionID() + " " + bfv.getCreatedAt());
    }
}
There doesn't seem to be a version number on BoxFileVersion, only version IDs, so I'd guess the version numbers are just each version's position in the versions collection.
Here's the output:
file current version: xxxx42182218
Deleted Version ID : xxxx42064367
bfv: [4] xxxx42064367 Tue May 09 16:43:54 PDT 2017
bfv: [3] xxxx32054815 Tue May 09 16:28:50 PDT 2017
bfv: [2] xxxx28578550 Tue May 09 16:19:47 PDT 2017
bfv: [1] xxxx28578201 Tue May 09 16:19:41 PDT 2017
file current version: xxxx47266430
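If that guess holds, with the collection ordered newest previous version first (as the decreasing indices in the output suggest), the mapping from list position to version number is simple arithmetic. A sketch in Python; the ordering and the `current_version` value are assumptions, not documented Box SDK behavior:

```python
def previous_version_numbers(current_version, num_previous):
    """Assuming the versions list is ordered newest-first, the entry
    at index i corresponds to version number current_version - 1 - i."""
    return [current_version - 1 - i for i in range(num_previous)]

# With a current version number of 5 and four previous versions,
# as in the output above:
print(previous_version_numbers(5, 4))  # [4, 3, 2, 1]
```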

Related

How to check if sheet A and B are literally the same sheet?

So I'm trying to optimize my sheet's onEdit function by first checking whether the modified cells are even in the right sheet, like this:
var spreadsheet1 = ss.getSheetByName("ONE");
var spreadsheet2 = ss.getSheetByName("TWO");

function onEdit(e) {
  let range = e.range;
  Logger.log("Starting onEdit");
  Logger.log("Range: " + range.getSheet().getSheetId().toString());
  Logger.log("spreadsheet1: " + spreadsheet1.getSheetId().toString());
  Logger.log("spreadsheet2: " + spreadsheet2.getSheetId().toString());
  Logger.log("Is spreadsheet1 :" + range.getSheet().getSheetId() == spreadsheet1.getSheetId());
  Logger.log("Is spreadsheet2:" + range.getSheet().getSheetId() == spreadsheet2.getSheetId());
  if (range.getSheet().getSheetId() == spreadsheet1.getSheetId() ||
      range.getSheet().getSheetId() == spreadsheet2.getSheetId()) {
    // do stuff
  }
}
The logs look like this:
Oct 12, 2021, 2:12:54 PM Info Starting onEdit
Oct 12, 2021, 2:12:54 PM Info Range: 78085800
Oct 12, 2021, 2:12:54 PM Info spreadsheet1: 78085800
Oct 12, 2021, 2:12:54 PM Info spreadsheet2: 514624715
Oct 12, 2021, 2:12:54 PM Info Is spreadsheet1: false
Oct 12, 2021, 2:12:54 PM Info Is spreadsheet2: false
One of the two conditions should return true and the if should be evaluated as true but for some reason it doesn't work!
The problem is in your loggers: each one concatenates two strings first and then compares the result to a third. So what's being compared looks something like this:
"Is spreadsheet1 :78085800" == "78085800"
This is obviously false, so all that's needed is to use parentheses to change the order of operations. Try updating the loggers to the following:
Logger.log("Is spreadsheet1: " + (range.getSheet().getSheetId() == spreadsheet1.getSheetId()))
Logger.log("Is spreadsheet2: " + (range.getSheet().getSheetId() == spreadsheet2.getSheetId()))
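The same precedence pitfall exists outside Apps Script. In Python, `+` also binds tighter than `==`, so the unparenthesized version compares a concatenated string against the bare id; a sketch with made-up ids:

```python
sheet_id = 78085800
range_id = 78085800

# Without parentheses, + runs first, so a concatenated string is
# compared against the raw id string -- always unequal.
wrong = "Is spreadsheet1: " + str(range_id) == str(sheet_id)
# With parentheses, the comparison happens first, then concatenation.
right = "Is spreadsheet1: " + str(range_id == sheet_id)

print(wrong)   # False
print(right)   # Is spreadsheet1: True
```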

Get Date from MySQL

I'm storing DATE columns in MySQL Workbench with the values '2018-06-13' and '2018-07-07'. When I fetch them from my Node.js server, I get something confusing:
con.query(stm, function(err, results){
    if (err) throw err;
    for (var i = 0; i < results.length; i++) {
        var d = new Date(results[i].date);
        console.log(d);
        console.log(d.getDate());
    }
    console.log(results);
});
Here's what I get:
2018-06-12T17:00:00.000Z
13
2018-07-06T17:00:00.000Z
7
[ RowDataPacket {
date: 2018-06-12T17:00:00.000Z},
RowDataPacket {
date: 2018-07-06T17:00:00.000Z } ]
Does anyone know why the stored dates are 13 and 7, but the received values appear shifted back by one day?
The answer here is simple: what the server returns is UTC. What new Date(results[i].date) represents is that same UTC instant, displayed with the offset between UTC and your current time zone. (Note: calling Date(...) without new ignores its argument and returns the current time as a string, so make sure to use new.)
I ran
new Date('2018-06-12T17:00:00.000Z').toString()
and the output was
"Tue Jun 12 2018 22:30:00 GMT+0530 (India Standard Time)"
which is the same moment expressed in IST, i.e. UTC + 5 hrs 30 mins.
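The day shift in the question follows from the same rule. A sketch in Python's datetime module, assuming the asker is in a UTC+7 zone (which the output `2018-06-12T17:00:00.000Z` → `13` implies):

```python
from datetime import datetime, timedelta, timezone

# The driver returns the UTC instant corresponding to local midnight
# of the stored DATE '2018-06-13'.
utc_instant = datetime(2018, 6, 12, 17, 0, tzinfo=timezone.utc)

# Viewed in an assumed UTC+7 zone, the same instant falls on the 13th.
local = utc_instant.astimezone(timezone(timedelta(hours=7)))

print(utc_instant.date())  # 2018-06-12
print(local.date())        # 2018-06-13 -- same instant, local calendar day
```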

VIM Syntastic error for HTML5

I'm a newcomer to Vim and installed gVim successfully on my Windows 8.1 laptop. I installed the Syntastic plugin using Pathogen and, since I'm planning to write an Ionic project, I also downloaded the tidy5 HTML5 binary (from http://www.htacg.org/tidy-html5/) and referenced the binary in my vimrc as follows:
" SYNTASTIC
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*
" automatically load errors in the location list:
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
" check on errors when a file is loaded:
let g:syntastic_check_on_open = 1
" check on errors when saving the file:
let g:syntastic_check_on_wq = 0
let g:syntastic_html_tidy_execute = 'C:\Program Files (x86)\tidy-5.1.25-win32\bin\tidy'
let g:syntastic_shell='C:\Windows\System32\cmd'
" check debugging messages in vim with :mes:
let g:syntastic_debug = 1
let g:syntastic_mode_map = {"mode": "active",
\"active_filetypes" : ["html","javascript","json"],
\"passive_filetypes" : ["html", "javascript","json"] }
When I run
:SyntasticInfo
I get the following output
Syntastic version: 3.7.0-76 (Vim 704, Windows)
Info for filetype: html
Global mode: active
Filetype html is active
The current file will be checked automatically
Available checkers: -
Currently enabled checkers: -
For some reason, my default checkers aren't loading, and when I run
echo syntastic#util#system('echo %PATH%')
I get an E484 error:
Error detected while processing function syntastic#util#system:
line 9:
E484: Can't open file C:\Users\Dirk\AppData\Local\Temp\VIo8366.tmp
-1
I suspect there are multiple issues here, so I assume the first one I need to solve is the E484 error. Any help appreciated.

How to convert data from a custom format to CSV?

I have a file whose content looks like the sample below. I've included only a few records here, but there are around 1000 records in a single file:
Record type : GR
address : 62.5.196
ID : 1926089329
time : Sun Aug 10 09:53:47 2014
Time zone : + 16200 seconds
address [1] : 61.5.196
PN ID : 412 1
---------- Container #1 (start) -------
inID : 101
---------- Container #1 (end) -------
timerecorded: Sun Aug 10 09:51:47 2014
Uplink data volume : 502838
Downlink data volume : 3133869
Change condition : Record closed
--------------------------------------------------------------------
Record type : GR
address : 61.5.196
ID : 1926089327
time : Sun Aug 10 09:53:47 2014
Time zone : + 16200 seconds
address [1] : 61.5.196
PN ID : 412 1
---------- Container #1 (start) -------
intID : 100
---------- Container #1 (end) -------
timerecorded: Sun Aug 10 09:55:47 2014
Uplink data volume : 502838
Downlink data volume : 3133869
Change condition : Record closed
--------------------------------------------------------------------
Record type : GR
address : 63.5.196
ID : 1926089328
time : Sun Aug 10 09:53:47 2014
Time zone : + 16200 seconds
address [1] : 61.5.196
PN ID : 412 1
---------- Container #1 (start) -------
intID : 100
---------- Container #1 (end) -------
timerecorded: Sun Aug 10 09:55:47 2014
Uplink data volume : 502838
Downlink data volume : 3133869
Change condition : Record closed
My goal is to convert this to a CSV or TXT file like below:
Record type| address |ID | time | Time zone| address [1] | PN ID
GR |61.5.196 |1926089329 |Sun Aug 10 09:53:47 2014 |+ 16200 seconds |61.5.196 |412 1
Any guidance on how you think would be the best way to start this would be great. I think the sample gives a clear idea, but in words: I want to read the header fields of each record once and put each record's data under the output headers.
Thanks for your time and any help or suggestions.
What you're doing is creating an Extract/Transform script (the ET part of an ETL). I don't know which language you're intending to use, but essentially any language can be used. Personally, unless this is a massive file, I'd recommend Python as it's easy to grok and easy to write with the included csv module.
First, you need to understand the format thoroughly.
How are records separated?
How are fields separated?
Are there any fields that are optional?
If so, are the optional fields important, or do they need to be discarded?
Unfortunately, this is all headwork: there's no magical code solution to make this easier. Then, once you have figured out the format, you'll want to start writing code. This is essentially a series of data transformations:
Read the file.
Split it into records.
For each record, transform the fields into an appropriate data structure.
Serialize the data structure into the CSV.
If your file is larger than memory, this can become more complicated; instead of reading and then splitting, for example, you may want to read the file sequentially and create a Record object each time the record delimiter is detected. If your file is even larger, you might want to use a language with better multithreading capabilities to handle the transformation in parallel; but those are more advanced than it sounds like you need to go at the moment.
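The steps above can be sketched with Python's stdlib csv module. The field list is truncated for brevity and the separator/colon rules are inferred from the sample in the question; this is a sketch, not a production parser:

```python
import csv
import io

# Two abbreviated records in the format from the question; the real
# file has more fields per record and ~1000 records.
SAMPLE = """\
Record type : GR
address : 62.5.196
ID : 1926089329
time : Sun Aug 10 09:53:47 2014
--------------------------------------------------------------------
Record type : GR
address : 61.5.196
ID : 1926089327
time : Sun Aug 10 09:53:47 2014
"""

FIELDS = ["Record type", "address", "ID", "time"]

def parse_records(text):
    """Split on dash-only separator lines; split each data line
    on its first colon into a key/value pair."""
    records, current = [], {}
    for raw in text.splitlines():
        line = raw.strip()
        if line and set(line) == {"-"}:       # record separator line
            if current:
                records.append(current)
                current = {}
        elif ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:                               # no separator after last record
        records.append(current)
    return records

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=FIELDS, delimiter="|",
                        extrasaction="ignore")
writer.writeheader()
writer.writerows(parse_records(SAMPLE))
print(out.getvalue())
```

Writing to a real file instead of `io.StringIO` is a one-line change; the record-splitting logic is where the format-specific headwork lives.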
This is a simple PHP script that will read a text file containing your data and write a CSV file with the results. If you are on a system with command-line PHP installed, save it to a file in some directory, copy your data file next to it (renaming it to "your_data_file.txt"), and run "php whatever_you_named_the_script.php" from that directory.
<?php
$text = file_get_contents("your_data_file.txt");
preg_match_all("/Record type[\s\v]*:[\s\v]*(.+?)address[\s\v]*:[\s\v]*(.+?)ID[\s\v]*:[\s\v]*(.+?)time[\s\v]*:[\s\v]*(.+?)Time zone[\s\v]*:[\s\v]*(.+?)address \[1\][\s\v]*:[\s\v]*(.+?)PN ID[\s\v]*:[\s\v]*(.+?)/su", $text, $matches, PREG_SET_ORDER);
$csv_file = fopen("your_csv_file.csv", "w");
if ($csv_file) {
    if (fputcsv($csv_file, array("Record type", "address", "ID", "time", "Time zone", "address [1]", "PN ID"), "|") === FALSE) {
        echo "could not write headers to csv file\n";
    }
    foreach ($matches as $match) {
        $clean_values = array();
        for ($i = 1; $i < 8; $i++) {
            $clean_values[] = trim($match[$i]);
        }
        if (fputcsv($csv_file, $clean_values, "|") === FALSE) {
            echo "could not write data to csv file\n";
        }
    }
    fclose($csv_file);
} else {
    die("could not open csv file\n");
}
This script assumes that your data records are always formatted like the examples you posted and that all values are always present. If the data file has exceptions to those rules, the script will have to be adapted accordingly. But it should give you an idea of how this can be done.
Update
Adapted the script to deal with the full format provided in the updated question. The regular expression now matches single data lines (extracting their values) as well as the record separator made up of dashes. The loop has changed a bit: it now fills a buffer array field by field until a record separator is encountered.
<?php
$text = file_get_contents("your_data_file.txt");
// this will match whole lines
// only if they either start with an alpha-num character
// or are completely made of dashes (record separator)
// it also extracts the values of data lines one by one
$regExp = '/(^\s*[a-zA-Z0-9][^:]*:(.*)$|^-+$)/m';
preg_match_all($regExp, $text, $matches, PREG_SET_ORDER);
$csv_file = fopen("your_csv_file.csv", "w");
if ($csv_file) {
    // in case the number or order of fields changes, adapt this array as well
    $column_headers = array(
        "Record type",
        "address",
        "ID",
        "time",
        "Time zone",
        "address [1]",
        "PN ID",
        "inID",
        "timerecorded",
        "Uplink data volume",
        "Downlink data volume",
        "Change condition"
    );
    if (fputcsv($csv_file, $column_headers, "|") === FALSE) {
        echo "could not write headers to csv file\n";
    }
    $clean_values = array();
    foreach ($matches as $match) {
        // first entry will contain the whole line
        // remove surrounding whitespace
        $whole_line = trim($match[0]);
        if (strpos($whole_line, '-') !== 0) {
            // this match starts with something other than -
            // so it must be a data field; store the extracted value
            $clean_values[] = trim($match[2]);
        } else {
            // this match is a record separator, write csv line and reset buffer
            if (fputcsv($csv_file, $clean_values, "|") === FALSE) {
                echo "could not write data to csv file\n";
            }
            $clean_values = array();
        }
    }
    if (!empty($clean_values)) {
        // there was no record separator at the end of the file
        // write the last entry that is still in the buffer
        if (fputcsv($csv_file, $clean_values, "|") === FALSE) {
            echo "could not write data to csv file\n";
        }
    }
    fclose($csv_file);
} else {
    die("could not open csv file\n");
}
Doing the data extraction using regular expressions is one possible method mostly useful for simple data formats with a clear structure and no surprises. As syrion pointed out in his answer, things can get much more complicated. In that case you might need to write a more sophisticated script than this one.

Run phpsh with specific php script

By default phpsh uses the system's default PHP; on my system that's /usr/local/bin/php, which is PHP 5.2. How can I run phpsh with my custom PHP path, /srv/bin/php?
It's not supported by default, but you can quickly add it by applying this patch to phpsh.py in the src folder:
--- phpsh.py 2011-05-13 18:16:32.000000000 -0400
+++ phpsh.py 2013-12-05 14:50:11.906673382 -0500
@@ -253,6 +253,7 @@
def __init__(self):
self.config = ConfigParser.RawConfigParser({
"UndefinedFunctionCheck": "yes",
+ "PathToBinary" : None,
"Xdebug" : None,
"DebugClient" : "emacs",
"ClientTimeout" : 60,
@@ -388,6 +389,8 @@
except Exception, msg:
self.print_error("Failed to load config file, using default "\
"settings: " + str(msg))
+ if self.config.get_option("General", "PathToBinary"):
+ os.environ['PATH'] = self.config.get_option("General", "PathToBinary") + ':' + os.environ['PATH']
if self.with_xdebug:
xdebug = self.config.get_option("Debugging", "Xdebug")
if xdebug and xdebug != "yes":
If you want to modify an already installed version, find your Python site-packages folder and apply the patch to both __init__.py and phpsh.py in that folder.
This adds a new configuration variable to phpsh/config (/etc/phpsh/config if installed as root, or ~/.phpsh/config per user). There you can specify the path to your PHP binary:
PathToBinary: /srv/bin
Note that this is the directory where the binary should be found, not the path to the binary itself; i.e. PathToBinary: /srv/bin/php will not work.
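The mechanism behind the patch is just PATH prepending: the configured directory goes first, so the php found there shadows the system one. A minimal Python sketch with illustrative paths:

```python
def prepend_to_path(directory, path):
    """Put `directory` first so executables in it win the PATH lookup."""
    return directory + ":" + path

# Illustrative: a PATH containing the system PHP, with /srv/bin
# prepended the way the patched phpsh.py does.
new_path = prepend_to_path("/srv/bin", "/usr/local/bin:/usr/bin")
print(new_path)  # /srv/bin:/usr/local/bin:/usr/bin
```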