How to automatically copy autograding files onto the students' repository? - github-classroom

I am teaching a course in C++. For automatic homework grading, I use a system that works as follows:
Clones the student's repository from GitHub;
Copies the test files from my repository on top of the student's code;
Runs the tests, computes the grade and records it.
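In shell terms, the current flow is roughly the following sketch (the repository URL, the test directory, and the make test runner are placeholders for whatever I actually use):

    # sketch of the existing grading flow; URLs, paths, and the test runner are placeholders
    git clone https://github.com/student/homework.git work
    cp -r /path/to/instructor-tests/. work/   # overwrite/add the instructor's test files
    cd work && make test                      # build, run the tests, and record the grade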
Now, I would like to switch to using GitHub Classroom, but I do not understand how their autograding feature works. In particular, when I try to add a test case, I can enter the test name and command, but there is no place to enter the files that contain the tests.
I thought that maybe I have to put my tests in the "template repository" that is given to the students. The problem is that students can delete tests that they do not manage to pass, so that they get 100. With 250 students and over 400 tests per exercise, it is nearly impossible to detect such deletions.
Is there a way to tell GitHub Classroom to copy my files on top of the repository submitted by the student, so that I can be sure that my tests are the ones that are executed?

Related

Revert Azure DevOps repo to unmerged state if pipeline unit tests fail

We have an Azure Pipeline that merges into a repository containing code that converts .json files representing customer orders into C# objects. Naturally, if the design or naming of these C# objects ever changes, the old orders become unusable, so we run a script 'Migrating' all these outdated .jsons to conform to the new model.
Our current pipeline that merges dev into production Migrates our .jsons, and we run a PowerShell unit test script after the pipeline's completion to ensure that the .jsons have successfully Migrated. We'd like to place this test into the pipeline itself, but there are two conditions we'd prefer to meet.
If the Test fails, not only abort the merge, but revert the .jsons to their un-Migrated versions.
Give us the option to continue the merge anyway, in the event that the website encounters an error so critical and urgent we are willing to bear the loss of a few quotes.
Are these conditions feasible?
Based on your description, you may want to use Build validation as part of your branch policies and settings.
Basically, let's assume your production code is in the Production branch and you create a Dev branch where you push your new commits. With a Build validation policy set on the Production branch, a pull request cannot be completed if the validation build (which contains the unit tests) fails, so the new code from the Dev branch will not be merged into the Production branch.
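For instance, the validation build could run your existing PowerShell test as one of its steps; a minimal sketch, assuming a hypothetical script path (a non-zero exit code fails the build, which in turn blocks completion of the pull request):

    # hypothetical test step inside the validation build; the script path is a placeholder
    pwsh -File ./tests/Verify-MigratedJsons.ps1 || exit 1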
Meanwhile, other branch policies may also help you with version control. The following documents may help as well:
Require a minimum number of reviewers
Check for linked work items
Check for comment resolution
Limit merge types

Sync Mercurial repositories between two central servers without a connection

I'm new to Mercurial and I have problems with the solution we're working to implement in my company. I work in a Lab with a strict security environment, and the production Mercurial server is on an isolated network. Everyone has two computers: one to work in the "real world" and another to work in the isolated, secure environment.
The problem is that we have other Labs distributed around the world, and in some cases two or more Labs need to work together on a project. Every Lab has an HG server to manage its own projects locally, but I'm not sure whether our method of syncing common projects is the best solution. To do it, we use a "bundle" to send the new changesets from one Lab to another. My question is about how good this method is, because the solution is a bit complicated. The procedure is more or less as follows:
In Lab B: hg pull and hg update, to make sure the local folder has the latest version.
Ask the other Lab for its hg log, to find the last common changeset.
In Lab A: hg pull and hg update, to make sure the local folder has the latest version.
In Lab A: make a bundle, "hg bundle --base XX project.bundle" (where XX is the last common changeset).
Send it to Lab B (with a complicated method due to the security requirements: encrypt files, encrypt drives, secure erase, etc.).
In Lab B: "hg unbundle projectYY.bundle" in the local folder.
This process creates two heads, which sometimes forces us to merge.
Once the changesets from Lab A are correctly integrated at Lab B, we need to repeat the process in the opposite direction, to bring the project's evolution in Lab B back to Lab A.
Could anyone enlighten me on the best way out of this dilemma?
Does anyone have a better solution?
Thanks a lot for your help.
Bundles are the right vehicle for propagating changes without a direct connection. But you can simplify the bundle-building process by modeling communication locally:
In Lab A, maintain repoA (the central repo for local use), as well as repoB, which represents the state of the repository in lab B. Lab B has a complementary set-up.
You can use this dual set-up to model the relationship between the labs as if you had a direct connection, but changeset sharing proceeds via bundles instead of push/pull.
From the perspective of Lab A: Update repoA the regular way, but update repoB only with bundles that you receive from Lab B and bundles (or changesets) that you are sending to Lab B.
More specifically (again from the perspective of Lab A):
In the beginning the repos are synchronized, but as development progresses, changes are committed only to repoA.
When it's time to bring lab B up to speed, just go to repoA and run hg outgoing path/to/repoB. You now know what to bundle without having to request and study lab B's logs. In fact, hg bundle bundlename.bzip repoB will bundle the right changesets for you.
Encrypt and send off your bundle.
You can assume that the bundle will be integrated into Lab B's home repo, so update your local repoB as well, either by pushing directly or (for assured consistency) by unbundling (importing) the bundle that was mailed off.
When lab B receives the bundle, they will import it into their own copy of repoA; it is now updated to the same state as repoA in lab A. Lab B can now push or pull those changes into their own repoB and merge them (in repoB) with their own unshared changesets. This will generate one or more merge changesets, which are handled just like any other check-ins to lab B's repoB.
And that's that. When lab B sends a bundle back to lab A, it will use the same process, steps 1 to 5. Everything stays synchronized just as it would if the repositories were directly connected. As always, it pays to synchronize frequently so as to avoid diverging too far and encountering merge conflicts.
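Put together, one round on the Lab A side might look roughly like this (the directory layout and bundle file name are assumptions):

    # inside Lab A, with local clones repoA and repoB sitting side by side
    cd repoA
    hg outgoing ../repoB                          # list changesets Lab B does not have yet
    hg bundle changes-for-labB.hg ../repoB        # bundle exactly those changesets
    # ... encrypt and ship changes-for-labB.hg as usual ...
    hg -R ../repoB unbundle changes-for-labB.hg   # keep the local model of Lab B in sync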
In fact you have more than two labs. The approaches to keeping them synchronized are the same as if you had a direct connection: Do you want a "star topology" with a central server that is the only node the other labs communicate with directly? Then each lab only needs a local copy of this server. Do you need lots of bilateral communication before some work is shared with everyone? Then keep a local model of every lab you want to exchange changesets with.
If you have no direct network communication between the two mercurial repositories, then the method you describe seems like the easiest way to sync those two repositories.
You could probably save a bit of process boilerplate when working out which new changesets need bundling; how exactly depends on your setup.
For one, you don't need to update your working copy in order to create the bundles; it suffices to have the repository itself, with no working copy at all.
And if you know the date and time of the last sync, you can simply bundle all changesets added since that time, using an appropriate revset, e.g. all revisions since 30 March this year: hg log -r 'date(">2015-03-30")'. That way you could skip a lengthy manual review process.
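For example, something like this would list and then bundle everything committed since the last sync (the date and the base revision NNN are placeholders):

    # list everything committed since the last sync date
    hg log -r 'date(">2015-03-30")' --template '{rev}:{node|short} {desc|firstline}\n'
    # then bundle on top of the last changeset both sides already share
    hg bundle --base NNN project.bundle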
If your repository is not too big (and thus fits on the media you use for exchange), simply copy it there in its entirety and do a local pull from that exchange disk to sync, skipping those review processes, too.
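A sketch of that full-copy variant, with placeholder paths:

    # copy the whole repository onto the exchange medium ...
    cp -r /work/project /media/exchange/project
    # ... and on the receiving side, pull from that copy into the local repository
    hg -R /work/project pull /media/exchange/project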
Of course you will not be able to avoid making the merges; they are the price you pay when several people work on the same thing at the same time, each committing to their own repo.

Commit-based view of Jenkins builds

I would like to be able to present a view of Jenkins builds similar to the buildbot console view. With Jenkins out of the box, there appears to be really no good way to associate a commit with a build. You have to access the specific build to determine what commit it was building.
I would like to be able to show status on what commits have been tested in a particular branch, so we know if a commit was skipped or if the latest commit has not yet been tested.
I tried using the Jenkins API for this, but I found that I could only see the SHA1 hash for a git commit via the build itself, i.e. via http://server/job/job-name/388/api/json. So, the only way I can see to take a commit and find builds for it is to iterate through every build in a job and retrieve its associated build info. This is certainly not going to be efficient or fast. Is there another way to do it?
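For reference, the brute-force scan I'm describing would look roughly like this (the server address, job name, use of jq, and the Git plugin's lastBuiltRevision field are all assumptions):

    # iterate over every build of the job and print the commit it built (slow, as noted above)
    JOB="http://server/job/job-name"
    for n in $(curl -sg "$JOB/api/json?tree=builds[number]" | jq -r '.builds[].number'); do
      sha=$(curl -s "$JOB/$n/api/json" \
            | jq -r '[.actions[].lastBuiltRevision.SHA1] | map(select(. != null)) | first // "unknown"')
      echo "build $n -> $sha"
    done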
Imperfect Answer: put the "revision number" you care about in the package name of all related artifacts, and use the "fingerprint" feature.
For example: my "product package" artifacts have a revision number, and if I carried that through to the "test package" artifact (which includes the unpacked product artifact), you would be able to track that revision number via the "artifact/fingerprint" feature and show which test jobs used it. Even then, you can't tell with a single click which test used which "commit."

Simplest workflow for non-developers using Mercurial, working on different files, without having to think about merging?

I currently use SVN for a number of things that aren't exactly code, for instance xml files, report templates, miscellaneous files, etc. I have several non-developers who are comfortable using TortoiseSVN for this. They typically work as follows:
Person A - does an SVN Update on the folder of interest to them. Or perhaps just on a single file.
Person A - edits whichever file(s) they're working on. Perhaps add or remove files.
Person B - someone else is probably working on different files at this point
Person A - does an SVN Commit to save their changes to the repository.
Very occasionally they'll hit conflicts where more than one person has edited a file. Almost always this is just because they forgot step #1. Because they're always working on separate files, there are (almost) never real conflicts. As long as they do step #1 first everything works fine.
I'd like to move to Mercurial; however, something holding me back is the prospect of having to 'merge' all the time, because Mercurial looks at the state of the entire repository, not just the files of interest at a particular time. For example, the workflow would be like this:
Person A - does a pull and update on the repository. (let's assume there are no local changes so this is straightforward).
Person A - edits whichever file(s) they're working on. Perhaps add or remove files.
Person B - someone else edits, commits, and pushes a different file at this point
Person A - commits changes. Tries to push. Gets an error about multiple heads.
Person A - does a pull and update. update doesn't work: merge required.
Person A - does a merge. If using TortoiseHg it's a bit confusing working out what to click on to do the merge. I guess this is simpler on the command line, provided there are no complications.
Person A - commits the merge.
Person A - pushes the changes.
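Spelled out as commands, that sequence looks roughly like this:

    hg pull && hg update        # step 1
    # ... Person A edits files; Person B pushes a different change meanwhile ...
    hg commit -m "my edits"
    hg push                     # rejected: the push would create a new remote head
    hg pull                     # fetch Person B's changeset; a plain update is not enough now
    hg merge                    # combine the two heads
    hg commit -m "merge"
    hg push                     # succeeds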
My resistance is that there are more steps, and the merge step is somewhat hard to get your head around if you're not a developer. Is there a way I can put these steps together to make the process nice and simple?
"Very occasionally they'll hit conflicts where more than one person has edited a file. Almost always this is just because they forgot step #1. Because they're always working on separate files, there are (almost) never real conflicts. As long as they do step #1 first everything works fine."
If this is the case, why do you want to use a DVCS? Mercurial is great, but the benefits of a DVCS come from the ability to merge and fork and the ease of doing either. If your workflow requires neither, why would you want to switch toolsets?
Sounds like the rebase extension might work for you. The workflow becomes:
hg clone
make changes
hg commit
hg pull --rebase
hg push
The local revisions get "rebased" onto the latest tip on pull, which avoids the merge.
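Note that the rebase extension ships with Mercurial but has to be enabled; a one-off step per user might look like this (the config file path differs on Windows):

    # enable the bundled rebase extension in the user's configuration
    printf '[extensions]\nrebase =\n' >> ~/.hgrc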
One possible approach is to have a point person who does all the real work of merging. I'm not a big fan of letting everyone push to one shared repo, especially if they don't know what they are doing. An alternative approach is that A has local repo A, B has local repo B, and there is a repo S, which combines A and B. Then, don't let A or B push to S. Instead, let an expert pull from A and B and do the merging in S. Then A and B never have to push to S. If they coordinate with the expert, then he/she will already have merged their changes into S by the time they pull updates from S, so A and B will not have to merge either when pulling. This is actually the default mode in which a DVCS works, since by default all repositories are read-only except to their owner.
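A rough sketch of one round for that designated integrator, with placeholder paths:

    # run from inside the shared repo S
    hg pull /path/to/repoA        # fetch A's changesets
    hg pull /path/to/repoB        # fetch B's changesets (this may create extra heads)
    hg merge                      # reconcile two heads; with more, merge them one at a time
    hg commit -m "merge A and B"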

Moving from Subversion to Mercurial - how to adapt the workflow and staging/integration systems?

We got all psyched about moving from svn to hg, and as the development workflow is more or less fleshed out, there remains the most difficult part: the staging and integration system.
Hopefully this question goes a bit further than your common 'how do I move from xxx to Mercurial'. Please forgive the long and probably poorly written question :)
We are a web shop that does a lot of projects (mainly PHP and Zend), so we have one huge svn repo with 100+ folders, each representing a project with its own tags, branches, and trunk, of course. On our integration and testing server (where QA and clients look at work results and test stuff) everything is pretty much automated: Apache is set to pick up new projects automatically, creating a vhost for each project/trunk; mysql migration scripts live right there in trunk too, and developers can apply them through a simple web interface. Long story short, our workflow is this now:
Checkout code, do work, commit
Run an update on the server via the web interface (this basically does svn up on the server for a particular project and also runs the db-migration script if needed)
QA changes on the server
This approach is certainly suboptimal for large projects when we have 2+ developers working on the same code. Branching in svn was only causing more headaches, hence the move to Mercurial. And here is where the question lies: how does one organize an efficient staging/integration/testing server for this type of work (where you have many projects, and a single developer could be working on 3 different projects in one day)?
We decided to have the 'default' branch essentially track production and to make all changes in individual branches. In that case, though, how can we automate staging updates for each branch? Whereas earlier, for one project, we were almost always working on trunk, so we needed one DB, one vhost, etc., now we are potentially talking about N databases per project, N vhost configs, and so on. Then what about CI (such as running phpDocumentor and/or unit tests)? Should it only be done on 'default'? On branches?
I wonder how other teams solve this issue, perhaps some best practices that we're not using or overlooking?
Additional notes:
Probably worth mentioning that we've picked Kiln as a repo hosting service (mostly since we're using FogBugz anyway)
This is by no means the complete answer you'll eventually pick, but here are some tools that will likely factor into it:
repositories without working directories -- if you hg clone -U or hg update null, you get a repository with no working directory (only the .hg). They're better on the server because they take up less room and no one is tempted to edit there
changegroup hooks
For that last one: the changegroup hook runs whenever one or more changesets arrive via push or pull, and you can have it do some interesting things, such as:
push the changesets on to another repo depending on what has arrived
update the receiving repo's working directory
For example one could automate something like this using only the tools described above:
developer pushes five changesets to central-repo/project1/main
last changeset is on branch 'my-experiment', so the csets are automatically re-pushed to the (optionally created) repo central-repo/project1/my-experiment
central-repo/project1/my-experiment automatically does hg update tip, which is certain to be on the my-experiment branch
central-repo/project1/my-experiment automatically runs tests in its working dir and if they pass does a 'make dist' that deploys, which might set up database and vhost too
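A minimal sketch of such a changegroup hook, assuming it is wired up in the receiving repository's hgrc as changegroup = /srv/hg/scripts/ci-hook.sh (all paths and make targets are placeholders):

    #!/bin/sh
    # runs after changesets arrive in central-repo/project1/my-experiment
    hg update tip                # bring the working directory up to the new tip
    make test && make dist       # run the tests; deploy only if they pass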
The biggie, and chapter 10 in the Mercurial book covers this, is to not have the user waiting on that process. You want the user to push to a repo that contains possibly-okay code and have the automated processes do the CI and deploy work; if that passes, the result ends up in a likely-okay repo.
In the largest Mercurial setup in which I've worked (20 or so developers), we got to the point where our CI system (Hudson) was periodically pulling from the maybe-okay repo for each project, then building and testing, handling each branch separately.
Bottom line: all the tools you need to set up whatever you'd like probably already exist, but gluing them together will be one-off work.
What you need to remember is that DVCS (vs. CVCS) introduces another dimension to versioning:
You no longer have to rely only on branching (and getting a staging workspace from the right branch)
With a DVCS you now also have the publication workflow (push/pull between repos)
Meaning your staging environment is now a repo (with the full history of the project), checked out at a certain branch:
Many developers can push many different branches to that staging repo: the reconciliation process can be done in isolation within that repo, in a "main" branch of your choice.
Or they can pull that staging branch into their repo and test things out before pushing back.
(From Joel's tutorial on Mercurial, HgInit)
A developer doesn't necessarily have to commit for others to see their work: the publication process in a DVCS allows them to pull the staging branch first, reconcile any conflicts locally, and then push to the staging repo.
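For example (the repository URL and branch names are placeholders):

    # from the developer's local clone
    hg pull https://staging.example.com/project   # fetch what is on staging
    hg update staging                             # switch to the staging branch
    hg merge my-feature                           # reconcile the local work with staging
    hg commit -m "merge my-feature into staging"
    hg push https://staging.example.com/project   # publish the reconciled result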