I'm trying to convert our company's CVS repository to Mercurial, so far without success. The remote file history should be retained.
I tried the conversion tools described on the Mercurial wiki page RepositoryConversion.
Convert extension
e:\Hg>hg convert -s CVS H:\Cvs
assuming destination cvs-hg
initializing destination cvs-hg repository
abort: CVS: invalid source repository type
What's going on!?
cvs2hg
Looks very promising, but errors out at the last moment.
python cvs2hg -v --encoding=UTF8 --hgrepos=e:/Hg h:/Cvs
Result after 10 minutes and 9 miles of text:
----- pass 14 (SortSymbolOpeningsClosingsPass) -----
Sorting symbolic name source revisions...
Done
Time for pass14 (SortSymbolOpeningsClosingsPass): 0.150 seconds.
Deleting cvs2svn-tmp\statistics-13.pck
Deleting cvs2svn-tmp\symbolic-names.txt
----- pass 15 (IndexSymbolsPass) -----
Determining offsets for all symbolic names...
VERSION_0_22
VERSION_0_18
VERSION_0_11
VERSION_0_9
VERSION_0_5
VERSION_0_1
Done.
Time for pass15 (IndexSymbolsPass): 0.130 seconds.
Deleting cvs2svn-tmp\statistics-14.pck
----- pass 16 (OutputPass) -----
Traceback (most recent call last):
File "cvs2hg", line 91, in <module>
hg_main(os.path.basename(sys.argv[0]), sys.argv[1:])
File "c:\Portable progs\cvs2svn-19b322d42b1f\cvs2svn_lib\main.py", line 135, in hg_main
main(progname, run_options, pass_manager)
File "c:\Portable progs\cvs2svn-19b322d42b1f\cvs2svn_lib\main.py", line 96, in main
pass_manager.run(run_options)
File "c:\Portable progs\cvs2svn-19b322d42b1f\cvs2svn_lib\pass_manager.py", line 181, in run
the_pass.run(run_options, stats_keeper)
File "c:\Portable progs\cvs2svn-19b322d42b1f\cvs2svn_lib\passes.py", line 1771, in run
svn_commit.output(Ctx().output_option)
File "c:\Portable progs\cvs2svn-19b322d42b1f\cvs2svn_lib\svn_commit.py", line 238, in output
output_option.process_primary_commit(self)
File "c:\Portable progs\cvs2svn-19b322d42b1f\cvs2svn_lib\hg_output_option.py", line 291, in process_primary_commit
svn_commit, [parent1, parent2], filenames, getfilectx, lod)
File "c:\Portable progs\cvs2svn-19b322d42b1f\cvs2svn_lib\hg_output_option.py", line 715, in _commit_primary
return self._commit(svn_commit, parents, filenames, getfilectx, lod)
File "c:\Portable progs\cvs2svn-19b322d42b1f\cvs2svn_lib\hg_output_option.py", line 733, in _commit
return self._commit_memctx(mctx)
File "c:\Portable progs\cvs2svn-19b322d42b1f\cvs2svn_lib\hg_output_option.py", line 739, in _commit_memctx
node = self.repo.commitctx(mctx)
File "mercurial\localrepo.pyo", line 63, in wrapper
File "mercurial\localrepo.pyo", line 1399, in commitctx
File "mercurial\localrepo.pyo", line 1193, in _filecommit
File "mercurial\filelog.pyo", line 76, in cmp
AttributeError: 'bool' object has no attribute 'startswith'
Tailor, fromcvs
Initially blocked by link rot; after a lot of Googling, these turn out to be Linux-only tools (?). I do have Cygwin, but I've never had good experiences compiling source distributions.
hg-cvs-import
Link rot here too, and I can't find anything. Moreover, regarding the last three tools, I read on "Would you migrate from cvs to svn or directly to git or hg?": "The Tailor extension, hg-cvs-import, fromcvs seems to be old code and aren't maintained any more."
I also tried the trick from "Convert cvs to mercurial", even though it probably only retains local file history, but got the same result as in my first try.
Any other tools I somehow missed? Maybe a user-friendly application for Windows?
I succeeded with cvs2hg and the Mercurial 2.0 source (not a newer version):
Download the Mercurial 2.0 source,
Copy the Mercurial folder (found inside the Mercurial-2.0 folder) to the cvs2svn folder,
Go into cvs2svn/Mercurial/pure,
Copy its contents up into cvs2svn/Mercurial,
Execute python cvs2hg
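For reference, the same steps as a small Python sketch. The Mercurial download path is a placeholder, the cvs2svn folder is the one from the traceback above, and the package folder inside the Mercurial source tarball is named mercurial:

import os, shutil

hg_source = r"C:\Downloads\mercurial-2.0\mercurial"      # placeholder: package folder inside the Mercurial 2.0 source
cvs2svn_dir = r"C:\Portable progs\cvs2svn-19b322d42b1f"  # the cvs2hg/cvs2svn checkout (path from the traceback)
dest = os.path.join(cvs2svn_dir, "mercurial")

# Copy the Mercurial package into the cvs2svn folder.
shutil.copytree(hg_source, dest)

# Overwrite the C-extension stubs with the pure-Python modules from pure/.
pure_dir = os.path.join(dest, "pure")
for name in os.listdir(pure_dir):
    src = os.path.join(pure_dir, name)
    if os.path.isfile(src):
        shutil.copy(src, dest)

# Finally run:  python cvs2hg -v --encoding=UTF8 --hgrepos=e:/Hg h:/Cvs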
Related
So I'm working with ArcGIS Pro, trying to build and train a deep learning model, and I keep running into the same wall.
The error I receive when running the "Train Deep Learning Model" tool is Error 003610: The specified output folder contains files. Use an empty folder.
The issue is that the folder is entirely new, created just for this model, so I'm a little confused. Additionally, the tool gives the following text as further information, but I'm not sure what it means:
"Traceback (most recent call last):
File "c:\program files\arcgis\pro\Resources\ArcToolbox\toolboxes\Image Analyst Tools.tbx\TrainDeepLearningModel.tool\tool.script.execute.py", line 390, in
execute()
File "c:\program files\arcgis\pro\Resources\ArcToolbox\toolboxes\Image Analyst Tools.tbx\TrainDeepLearningModel.tool\tool.script.execute.py", line 334, in execute
training_model_object.fit(
File "C:\Users\bmmoh\AppData\Local\ESRI\conda\envs\deeplearning\lib\site-packages\arcgis\learn\models_arcgis_model.py", line 902, in fit
lr = self.lr_find(allow_plot=False)
File "C:\Users\bmmoh\AppData\Local\ESRI\conda\envs\deeplearning\lib\site-packages\arcgis\learn\models_arcgis_model.py", line 721, in lr_find
raise e
File "C:\Users\bmmoh\AppData\Local\ESRI\conda\envs\deeplearning\lib\site-packages\arcgis\learn\models_arcgis_model.py", line 718, in lr_find
self.learn.lr_find()
File "C:\Users\bmmoh\AppData\Local\ESRI\conda\envs\deeplearning\lib\site-packages\fastai\train.py", line 40, in lr_find
epochs = int(np.ceil(num_it/len(learn.data.train_dl))) * (num_distrib() or 1)
ZeroDivisionError: division by zero
"
I'm working with ArcGIS Pro 3.0.1 and have updated the various deep learning packages I'm using. I also deleted and re-cloned my Python environment, so I'm kind of at a loss. Any idea what I should do?
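The traceback at least narrows things down: the division by zero happens at len(learn.data.train_dl), meaning the training dataloader holds zero batches, which typically happens when the batch size exceeds the number of exported training chips (or the chips folder is effectively empty). A hedged way to check this outside the tool, assuming the standard arcgis.learn workflow and a hypothetical chips path:

from arcgis.learn import prepare_data

# Hypothetical path to the exported training chips; batch_size should mirror the tool's setting.
data = prepare_data(r"C:\data\exported_chips", batch_size=8)

# lr_find() divides by the number of training batches; 0 here reproduces the ZeroDivisionError.
print("training samples:", len(data.train_ds))
print("training batches:", len(data.train_dl))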
I have a directory consisting of 22 sub-directories. Altogether, the directory is about 750GB in size and I need this data on GDrive so that I can work with it in Google Colab. Obviously uploading this takes an absolute age (particularly with my slow connection) so I would like to zip it, upload it, then unzip it in the cloud.
I am using 7zip and zipping each subdirectory using the zip format and "normal" compression level. (EDIT: I can now confirm that I get the same error with the 7z and tar formats.) Each subdirectory ends up between 14 and 20GB in size. I then upload this and attempt to unzip it in Google Colab using the following code:
from google.colab import drive
drive.mount('/content/gdrive/')
!apt-get install p7zip-full
!7za x "/content/gdrive/My Drive/av_tfrecords/drumming_7zip.zip" -o"/content/gdrive/My Drive/unzipped_av_tfrecords/" -aos
This extracts some portion of the zip file before throwing an error. There are a variety of errors and sometimes the code will not even begin unzipping the file before throwing an error. This is the most common error:
Can not open the file as archive
ERROR: Unknown error -2147024891
Archives with Errors: 1
If I then attempt to rerun the !7za command, it may extract one or two more files from the zip archive before throwing this error:
terminate called after throwing an instance of 'CInBufferException'
It may also complain about particular files within the zip archive:
ERROR: Headers Error : drumming/yt-g0fi0iLRJCE_23.tfrecords
I have also tried using:
!unzip -n "/content/gdrive/My Drive/av_tfrecords/drumming_7zip.zip" -d "/content/gdrive/My Drive/unzipped_av_tfrecords/"
But that just begins throwing errors:
file #254: bad zipfile offset (lseek): 8137146368
file #255: bad zipfile offset (lseek): 8168710144
file #256: bad zipfile offset (lseek): 8207515648
Although I would prefer a solution in Colab, I have also tried an app available in GDrive named "Zip Extractor". But that too throws an error, and it has a data quota.
This has now happened across 4 zip files, and each time I try something new it takes a long time because of the upload speeds. Any explanation of why this is happening and how I can resolve it would be greatly appreciated. I also understand there are probably alternatives to what I am trying to do; those would be appreciated as well, even if they do not directly answer the question. Thank you!
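One way to narrow this down before reaching for 7za is to CRC-check the archive with Python's standard zipfile module (which handles >4 GB zip64 archives); if it already reports a corrupt member, the file was damaged during creation or upload rather than by the extractor. This is only a diagnostic sketch, using the same path as above:

import zipfile

path = "/content/gdrive/My Drive/av_tfrecords/drumming_7zip.zip"
with zipfile.ZipFile(path) as zf:
    bad = zf.testzip()   # CRC-checks every member; returns the first corrupt name, or None
    print("first corrupt member:", bad)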
I got the same problem and solved it by passing the command as an argument array:
new ProcessBuilder(new String[] {"7z", "x", fPath, "-o" + dir});
Use a command-line array, not a single full-line string!
Good luck!
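For readers extracting in Colab rather than from Java, the same argument-array idea applies to Python's subprocess module; the paths are the ones from the question and -aos is the skip-existing switch used above:

import subprocess

# Each argument is its own list element, so spaces in the Drive paths survive intact.
subprocess.run(
    ["7za", "x",
     "/content/gdrive/My Drive/av_tfrecords/drumming_7zip.zip",
     "-o/content/gdrive/My Drive/unzipped_av_tfrecords/",
     "-aos"],
    check=True,  # raise CalledProcessError if 7za exits non-zero
)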
Why does this command behave differently depending on whether it's called from Terminal.app or a Scala program?
I am using Ubuntu 14.04. I cloned the SSD branch of Caffe (I named it caffe-ssd) and met the problem below when running bash caffe-ssd/data/VOC0712/create_data.sh:
Traceback (most recent call last):
File "/home/lab/caffe-ssd/data/VOC0712/../../scripts/create_annoset.py", line 107, in
label_map = caffe_pb2.LabelMap()
AttributeError: 'module' object has no attribute 'LabelMap'
This is my PYTHONPATH:
lab#lab:~$ echo $PYTHONPATH
/home/lab/caffe-ssd/python
I also added the line below to create_annoset.py, but it doesn't seem to help.
sys.path.append("/home/lab/caffe-ssd/python")
I guessed that maybe the problem was that the SSD branch I had cloned was not the original version, so I downloaded the SSD branch zip from GitHub
https://github.com/weiliu89/caffe/tree/ssd
and unzipped it. Then I met a new problem when running bash create_data.sh:
no module named _caffe
even though I could import caffe in the Python shell. I read that this problem comes up due to $PYTHONPATH confusion, so I added the line below to caffe_root/scripts/create_annoset.py:
sys.path.insert(0,"YOUR_SSD_BRANCH_CAFFE/python")
After that, everything worked.
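A minimal sketch of that fix in one place, using the paths from this question; the key point is that the sys.path entry must be inserted before import caffe, so the SSD build's python/ folder wins over any other installed caffe:

import sys
sys.path.insert(0, "/home/lab/caffe-ssd/python")  # must come before "import caffe"

import caffe
from caffe.proto import caffe_pb2

print(caffe.__file__)        # should point inside /home/lab/caffe-ssd/python
print(caffe_pb2.LabelMap())  # raises the AttributeError above if a non-SSD caffe wins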
While coding, I came across this:
from hidden_lib import train_classifier
Out of curiosity, is there a way to access the function using the terminal and see what's inside there?
You can use "inspect" library to do that, but it will work only if you have the source code of the "hidden_lib" somewhere on your machine:
>>> import hidden_lib
>>> import inspect
>>> print inspect.getsource(hidden_lib.train_classifier)
Otherwise the library will throw an exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\inspect.py", line 701, in getsource
lines, lnum = getsourcelines(object)
File "C:\Python27\lib\inspect.py", line 690, in getsourcelines
lines, lnum = findsource(object)
File "C:\Python27\lib\inspect.py", line 529, in findsource
raise IOError('source code not available')
IOError: source code not available
In that case you need to decompile the .pyc file first. To do that, go to:
https://github.com/wibiti/uncompyle2
then download the package, go to the package folder and install it:
C:\package_location> C:\Python27\python.exe setup.py install
Now you can easily find the location of the library by typing [1]:
>>> hidden_lib.__file__
Then go to the directory it points to and decompile the file:
>C:\Python27\python.exe C:\Python27\Scripts\uncompyle2 -o C:\path_pointed_by_[1]\hidden_lib.py C:\path_pointed_by_[1]\hidden_lib.pyc
The sources should be decompiled successfully:
# 2016.05.07 17:47:36 Central European Daylight Time
+++ okay decompyling hidden_lib.pyc
# decompiled 1 files: 1 okay, 0 failed, 0 verify failed
# 2016.05.07 17:47:36 Central European Daylight Time
And now you can display the source of functions exposed by hidden_lib the way I described at the beginning of the post. If you are using IPython, you can also use the built-in help(hidden_lib.train_classifier) to inspect it the same way.
IMPORTANT NOTE: the uncompyle2 library (that I used) works only with Python 2.7; if you want to do the same for Python 3.x, you need to find a similar library (uncompyle6 or decompyle3, for example).
I want to be able to use Boxer as a semantic extractor inside NLTK.
I am testing with the following code:
#!/bin/env python
import nltk
x = nltk.sem.boxer.Boxer()
x.interpret("The capital of Spain is Madrid .")
The failure is the following:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/nltk/sem/boxer.py", line 83, in interpret
d, = self.batch_interpret_multisentence([[input]], discourse_ids, question, verbose)
File "/usr/lib/python2.7/site-packages/nltk/sem/boxer.py", line 140, in batch_interpret_multisentence
drs_dict = self._parse_to_drs_dict(boxer_out, use_disc_id)
File "/usr/lib/python2.7/site-packages/nltk/sem/boxer.py", line 241, in _parse_to_drs_dict
line = lines[i]
IndexError: list index out of range
From the NLTK code at http://nltk.org/_modules/nltk/sem/boxer.html#Boxer, I found that the _parse_to_drs_dict(self, boxer_out, use_disc_id) function does an i += 4 that I haven't been able to understand.
Am I feeding something bad to the Boxer?
Did anyone manage to make it work?
Debugging manually step by step, NLTK actually does get the output from candc and boxer.
It seems that the newer version available in GitHub works seamlessly.
In the 2.0.4 code the i += 4 line is probably a bug.
To get NLTK working, download the source code from GitHub and install it with python setup.py install.
Be sure to set the CANDCHOME variable to the bin/ directory of your candc and boxer tools, and put the models in the folder above it (the path should be $CANDCHOME/../models).
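A minimal end-to-end sketch of that setup; /opt/candc is a placeholder for wherever you unpacked the candc tools, and the interpret() call is the one from the question:

import os

# Placeholder install location: bin/ holds the candc and boxer binaries,
# and the models must sit one level up, at $CANDCHOME/../models.
os.environ["CANDCHOME"] = "/opt/candc/bin"

import nltk

boxer = nltk.sem.boxer.Boxer()
print(boxer.interpret("The capital of Spain is Madrid ."))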