Searching the web for a graphical tool to represent function and method call dependencies, I get the impression that Pyan3 is a good one, if not the only one.
However, I could not find installation instructions at this GitHub link: https://github.com/davidfraser/pyan
Could someone help?
Pyan3 maintainer here. As of today, pyan3 is on PyPI, so it can be installed via pip install pyan3. It installs a pyan3 script to easily call it from the terminal.
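For example, a minimal install-and-run sketch (the flags are the ones from the README, and mymodule.py is just a placeholder for your own file):
pip install pyan3
# write a call graph in Graphviz dot format to callgraph.dot
pyan3 mymodule.py --uses --no-defines --colored --grouped --annotated --dot > callgraph.dot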
The current situation is that my repo gets the latest developments and fixes for Pyan3 as long as I'm maintaining it, and changes are occasionally (but not very often) pushed to davidfraser's repo via a PR, when we both feel like dealing with one. Additionally, he's keeping the final version of Pyan2 archived with the tag pre-python3.
I think there are currently some bugfixes in my repo which have not yet been pushed, particularly with regard to compatibility with Python 3.6 and later.
If you run into problems with Pyan3, please open an issue here.
Just clone the repo to install.
There is a 'visualize_pyan_architecture.sh' script you can look at for an example of how to run it. Within it, change the path to the .py file (relative to where the script is located) to try different .py files; you can pass more than one, and it will relate them in the graph it produces. Beware passing *.py, though - I have found it can fail.
A little experimentation got me the results I wanted.
I found it best to export in yEd format and use yEd's auto-layout to make the graph readable. Then other layouts, like orthogonal and radial, can really give insight into complex projects.
If you use dot output, you may need to install Graphviz to get the dependencies:
sudo apt-get install graphviz
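To render the generated dot file, something like this works (filenames are placeholders):
# turn the dot output from pyan into an SVG you can open in a browser
dot -Tsvg callgraph.dot -o callgraph.svg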
Related
I am an applied mathematician and I have recently joined a project that involves the development of production code for our scientific application. The code base is not small and it's deployed as part of a web application.
When I joined, the code was miraculously maintained without a revision control system. There was a central folder on a server, and researchers would copy from it when they needed to work with the code. Inside this root directory there was a set of directories with different versions of the code, so people would start working on the latest version they found and create a new one with their modifications.
I created a Mercurial repository, added all code versions to it, and convinced everyone to use it. However, since moving to Mercurial, we have felt little if any need to bump version numbers, even though using hg copy allows us to keep revision history.
Here's where I need your advice on best practices for maintaining this code base. Does it make sense under an RCS to keep folders with different versions in a repo? If we keep a single copy of our code in the repo, what's the most common way to track versions? The README files? Should we keep snapshots of the code outside the repo, specifying versions? Does it make sense to keep things as they are? What strategies do you use?
Our team is a bunch of scientists and no one has experience maintaining such a repo, so I'm interested in what is commonly done.
If you are going to use a version control system, forget about those version folders. Completely. Mercurial will do that for you; the repository is a complete history of all files in the project.
A common way to track version numbers is with tags. You assign a tag with the version number to a changeset.
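For example (a minimal sketch; the version number is just an example):
hg tag 1.0          # record the current changeset as release 1.0
hg update -r 1.0    # later, check out exactly that release
hg log -r 1.0       # or inspect it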
To help you get started with version control, I suggest this book: Version Control by Example. It's free, it starts from the beginning, and it covers CVCS, DVCS, fundamentals, what a repository is, basic commands, and so on. It also has some interesting analogies, like the 3D file system: Directories x Files x Time. The book is fun and easy to understand; I highly recommend it.
I also recommend a GUI tool like TortoiseHg. In daily usage I spend most of my time in the console, but the GUI is very handy, especially in the beginning when you don't yet know all the commands. And the best part is the graph: you get visual feedback on what is going on.
This is a good and quick introduction to Mercurial, it even starts out by talking about how using folders to keep different versions is not so great.
I think you're probably on the wrong track if you are using the hg copy command; I've never needed it ;)
The tutorial teaches the command line version of hg, which I personally prefer. When you need a better overview of your repository, you can run "hg serve" and open localhost:8000 in your web browser. I prefer that over TortoiseHG, but I realize that many people want a pure GUI tool.
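For example, from inside the repository:
hg serve --port 8000
# then open http://localhost:8000 in your browser to see the changeset graph and file history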
At work we're moving from no SCM to Mercurial. It's a bit of a learning curve, but after messing with it for two days I definitely feel more comfortable with it.
I still have one big, unresolved question though in my mind: Once code is finished, how do we handle the actual deployment?
Should we be running a copy of Mercurial on the production (live) server? Or should we set rsync or something up to sync from the repo to the web directory? What's the best practice here?
If we do go w/ just pointing apache to the repo, I assume this is okay as long as we're careful not to hg update to a different, non-stable branch? That still seems a little dangerous to me though. Is there some way to force it to only switch to certain builds?
Or is pointing apache to the repo just a terrible idea and I should be doing something else instead?
On a related topic, I've also heard some talk about putting any upgrade scripts (such as schema changes for MySQL) under version control so they can be run when the version is deployed. But how would that even work as part of the workflow? I wouldn't want to keep them w/ everything else, because they're temporary, one-time-use scripts...
Thanks for any advice you guys can give.
I recently discovered the hg archive command, so I think we'll go w/ this instead. I've written a bash script that changes to the head of the 'production' branch then archives it to a predetermined destination. Seems to work.
I'd still appreciate any feedback you guys have as to whether this is a good idea or not.
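Something along these lines (a minimal sketch; the paths and branch name are placeholders for whatever you actually use):
#!/bin/sh
# archive the tip of the 'production' branch into a fresh, timestamped directory
REPO=/home/me/repos/myproject
DEST=/var/www/releases/myapp-$(date +%Y%m%d%H%M%S)

cd "$REPO" || exit 1
hg archive -r production "$DEST"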
I think pointing Apache at the repo is definitely a bad idea; hg archive is OK if all you want is a snapshot of the dev files.
I find my development source files and a deployed application (even for a web app that doesn't need compiling) are usually very different, the latter being derived from a subset of the former.
I tend to use a shell script or even a Makefile to "build" a deployed application in a subdirectory of the development directory; this could be as simple as creating a directory tree and copying the necessary files, or it could include compressing scripts, etc.
This way you have to make a conscious decision whether or not to include a file in the deployed version, thus helping prevent accidentally leaving development utility files in an online application that could cause a security risk.
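A minimal sketch of such a build script (every file name here is an illustrative placeholder):
#!/bin/sh
# "build" a deployable copy in ./deploy by copying only the files that belong online
rm -rf deploy
mkdir -p deploy/static

cp index.php app.php deploy/
cp -r lib deploy/
cp -r static/css static/js deploy/static/

# development-only utilities are deliberately never copied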
The only part Mercurial plays is this: for a major release I create a new named branch (e.g. 1.5), and development continues on the default branch. Subsequent bug fixes or patches can be transplanted to the release branch if necessary, and if a bug-fix release is made I tag the release branch with the new version (e.g. 1.5.1).
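In command form, that workflow looks roughly like this (the version numbers and the revision are examples; transplant needs the transplant extension enabled, or you can use graft instead):
hg branch 1.5                       # open the 1.5 release branch
hg commit -m "Start 1.5 release branch"
hg update default                   # development continues on default

# later: copy a bug fix onto the release branch and tag the fix release
hg update 1.5
hg graft REV_OF_FIX                 # or: hg transplant REV_OF_FIX
hg tag 1.5.1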
I use Mercurial for game development, and I'm trying to use the LargeFiles extension included in Mercurial 2.0 to keep track of large binary assets. Unfortunately there isn't a whole lot of documentation on the extension, so I'm not sure how people are expected to use it.
For example, is there any way to safely clean out the .hg/largefiles directory? If I'm on the tip revision, and expect to always have internet access, then I don't need the old versions of largefiles cluttering up the repository, since that's the whole point of using the LargeFiles extension.
Also, how do I have more fine-grained control over where the largefile store is? I can only assume that it's created somewhere on the computer that ran hg init, but I have no idea about the details.
Thanks!
I don't have any guidance on how to safely clean out the .hg/largefiles directory.
Largefiles Store
The largefiles store seems to live, by default, in one of the following locations:
Windows: C:\Users\Username\AppData\Local\largefiles
OSX: /Users/username/Library/Caches/largefiles
Linux (this is my best guess): /home/username/largefiles or /home/username/.cache/largefiles
User Configured:
This, however, can be changed in the global settings file using the usercache setting as follows:
[largefiles]
usercache = c:\path\to\largefiles\cache\
Note: This is not documented yet. This makes me wonder if it is subject to change.
Sources:
Largefiles Extension Documentation
User cache paths - https://www.mercurial-scm.org/repo/hg/file/41453d55b481/hgext/largefiles/lfutil.py (lines 84-103)
Undocumented largefiles.usercache setting - https://bz.mercurial-scm.org/show_bug.cgi?id=3088
I'm just posting this for anyone else coming into the thread from a search.
There's currently an issue using the largefiles extension in the mercurial python module when hosted via IIS. See this post if you're encountering issues pushing large changesets (or large files) to IIS via TortoiseHg.
The problem ultimately turns out to be a bug in SSL processing introduced in Python 2.7.3 (probably explaining why there are so many unresolved posts from people chasing problems with Mercurial). Rolling back to Python 2.7.2 let me get a little further (pushes were blocked at 30 MB instead of 15 MB), but to properly solve the problem I had to install the IISCrypto utility to completely disable transfers over SSLv2.
What's the best way to move a Visual Sourcesafe repository to Mercurial (I'm interested in retaining all history)?
While I haven't made that particular conversion, I have gone from VSS to SVN using (IIRC) this script. You'll probably want to look into tailor and do a search for vss2hg. Also keep in mind that it may make sense to go through an intermediate step like vss2svn + svn2hg or similar.
The primary bit of advice I'd give though is: script the conversion so you can re-run it easily. That will let you run nightly conversions from VSS to Hg and make sure that everything is converting correctly before you pull the trigger on it.
I am the author of the vss2hg.pl script and have used it to move many projects from VSS to Mercurial. It has one or two minor bugs where some comments are not completely converted, but I haven't seen any other issues. It converts the complete history and works around a problem with VSS where a user's PC clock can affect the order in which changes appear to have been made.
A version of the script is available here.
I used the vss2hg.pl script from here. It is a Perl script, so you need to install ActivePerl first.
It worked great, but I ran into a problem with the dates. It turns out that the script supports three kinds of date formats. By default it is set to the UK date format (in line 547). The other two date formats are commented out in the code. After enabling the US date format, the script converted my SourceSafe database without a problem.
The Mercurial wiki has this page, which might be of interest: https://www.mercurial-scm.org/wiki/SourceSafeConversion. I've never used Visual source safe, so I don't have any personal experience with it.
I also found an email from Patrick Mézard on the subject, but unfortunately he writes that a VSS converter would be difficult. He also talks about converting to Subversion first, and then from Subversion to Mercurial. I guess that means there are VSS -> SVN converters out there. You can probably google that yourself.
I have done a conversion from SourceSafe to Mercurial for a client. I first converted the SourceSafe database to a Subversion repository and then from Subversion to Mercurial using the hg convert extension. See my blog post for details.
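The Subversion-to-Mercurial step, for example, looks roughly like this (paths are placeholders, and the bundled convert extension has to be enabled in your hgrc first):
# in ~/.hgrc:
# [extensions]
# convert =

hg convert --source-type svn file:///path/to/converted-svn-repo project-hg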
I just tried using vss2hg and ran into a problem: it only picked up and converted one user. This means all my changesets etc. will not be accurate, as I won't be able to see who made them. Is this because I haven't pre-set up all the required users in hg?
So I'm pretty new to version control but I'm trying to use Mercurial on my Mac to keep a large Python data analysis program organized. I typically clone my main repository, tweak the clone's code a bit, and run the code on my data. If the changes were successful I commit and eventually push the changes back to my main repository. I guess that's a pretty typical workflow under version control.
My problem is that my code is run on the command line, with several command-line arguments that refer to data files in the current working directory (and I have many such directories I need to test the code in, and they're outside of version control). So before using Mercurial I just kept my code in one ~/bin directory which was part of my PATH environment variable. Now, with version control, I need to either:
(1) after each edit, copy my current clone's executables to the ~/bin directory before running the code on the command line, or
(2) each time I clone my code, add my current clone's path to the PATH, or
(3) specify the entire/path/to/my/programs on the command line each time I run the code.
None of these are very convenient, and I'm left feeling like there must be an elegant solution that I just don't know. Maybe something involving Mercurial's hooks? I want my under-revision code to be runnable on the command line between commits, so that seemed to rule out hooks, but I don't know... Many thanks for any suggestions!
Ry4an's answer is good if you want to continue with the multiple-clones workflow. But it's also worth being aware that Mercurial's powerful enough to allow you most of the benefits of that workflow without ever leaving your single "main" repo. I.e. you can create branches (named or anonymous) for experimental features, easily "hg update" to whatever version of the code you want to test, even use the mq extension to prune branches that didn't work out.
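For example, staying in a single clone (the branch name is hypothetical):
hg branch try-new-fitting           # the branch is created on the next commit
hg commit -m "Experiment with a new fitting routine"
hg update default                   # jump back to the stable code at any time
hg update try-new-fitting           # ...and back to the experiment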
What I do in such a case is set up a two-deep chain of symlinks to my binary in my current clone. For example I'll have:
/usr/bin/myappname
which is a symlink to
/home/me/repos/CURRENT/bin/myappname
where /home/me/repos/CURRENT is a symlink to whatever my current working clone is, for example:
/home/me/repos/myproject-experiment
After setting up the initial /usr/bin/myappname symlink all I have to do is update the CURRENT symlink when I create a new clone on which I'm working.
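Concretely, the setup looks something like this (using the paths from above; the last line is the one you rerun when you switch clones):
# one-time setup: the stable entry point in PATH points through CURRENT
sudo ln -s /home/me/repos/CURRENT/bin/myappname /usr/bin/myappname
ln -s /home/me/repos/myproject-experiment /home/me/repos/CURRENT

# when switching to a different clone, repoint only CURRENT
ln -sfn /home/me/repos/myproject-otherclone /home/me/repos/CURRENT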