I am developing a website for a company. However, some of the CSS attributes in my project go missing when I commit my code to the Subversion repository.
So on the other end, when my teammates access it, the attributes are missing. I have made sure this is not a mistake in my commit. How can I resolve this?
Subversion commits are atomic and consistent. If some files are missing, then they were not included in your commit or were not even added to your working copy with the svn add command.
When you put new files into your working copy, you first need to tell the Subversion client to begin tracking them, i.e. add them to the working copy. You do this with the svn add command.
Before you commit, you need to examine the status of your working copy with the svn status command. It will help you find out whether there are any unversioned files or irrelevant changes.
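A minimal sketch of that cycle (the file name is just a placeholder for whatever is missing on your side):

svn status                               # '?' marks unversioned files that will NOT be committed
svn add css/styles.css                   # schedule the missing file for addition
svn status                               # it should now show up as 'A' (added)
svn commit -m "Add missing stylesheet"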
Read SVNBook | Basic Work Cycle.
For some time now I have been thinking about how to set up a continuous delivery pipeline, and I can't seem to get my head around it.
I want to auto-create a GitHub release with a corresponding tag. I also want to update the version in a file inside the repo (/docs/antora.yml); the version in antora.yml should be the same as the tag I want to create.
Since the source of truth is always the main branch on GitHub, I only want to create tags in the remote repo. Tagging locally and pushing the tag to the remote repo is not an option! Ideally I would like a successful pull request to be the trigger, but (as far as I understand) then I always have to create a branch. Also, I don't want every PR into main to create a new release and tag; I want to handpick the PRs.
One way to achieve this would be to create a dedicated release/v1.2.3 branch, run everything from there, and auto-create a tag and release. A release pipeline could then listen for the tag creation and do the actual deployment, e.g. to DockerHub.
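A hedged sketch of what that first pipeline could look like as a GitHub Actions workflow; the file name, the sed one-liner, and the use of the gh CLI are my assumptions, not a reference implementation:

# .github/workflows/release.yml (sketch)
name: release
on:
  push:
    branches:
      - 'release/**'                 # only hand-picked release branches trigger this
permissions:
  contents: write                    # needed to push the version bump and create the release
jobs:
  tag-and-release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Derive version from branch name    # release/v1.2.3 -> v1.2.3
        run: echo "VERSION=${GITHUB_REF_NAME#release/}" >> "$GITHUB_ENV"
      - name: Update docs/antora.yml to match the tag
        run: |
          sed -i "s/^version:.*/version: ${VERSION#v}/" docs/antora.yml
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git commit -am "Set Antora version to ${VERSION}" || echo "nothing to commit"
          git push
      - name: Create tag and GitHub release
        env:
          GH_TOKEN: ${{ github.token }}
        run: gh release create "${VERSION}" --target "${GITHUB_REF_NAME}" --generate-notes

A second workflow triggered by pushed tags (on: push: tags) could then do the actual DockerHub deployment, which is the two-pipeline split described above.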
What I don't like about this is that I need a release branch, which kind of moves me further away from working trunk-based, and that I need two pipelines: a CD pipeline which would only handle organizational stuff and a release pipeline which does the technical deployments.
I would prefer to do everything in a single CD pipeline based on the main branch.
Now I'm very interested in your opinions. Are my thoughts valid, or is this some kind of anti-pattern? Do you know of any best practices? Maybe someone has an example implementation? I think I can handle writing the actual pipeline code; I'm just not sure how to organize the process. By the way, I already have a CI pipeline working, so to me CD is the logical next step.
I would like to display a code snippet from my own public GitHub repo on my own web site, without having to make a copy. This is for a tutorial to illustrate some programming concepts. So the goal is not to run the code but to pretty-print it, as it were.
I bet this is something that many people need. I am looking for a lead on how to do this.
You could use the GitHub API to accomplish this.
See the Get Contents endpoint.
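A minimal sketch with curl; OWNER, REPO, and the file path are placeholders, and the raw media type saves you from base64-decoding the content field of the default JSON response:

# fetch the raw file text, ready to drop into whatever pretty-printer the site uses
curl -H "Accept: application/vnd.github.raw+json" \
  "https://api.github.com/repos/OWNER/REPO/contents/src/example.js?ref=main"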
Yes, you can do that, either by publishing your repo on npm and then referencing it as an npm dependency in your project's package.json, or by referring to the GitHub repo directly as a dependency in package.json.
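A minimal package.json sketch for the second option; the dependency name and OWNER/REPO are placeholders:

{
  "dependencies": {
    "my-snippets": "github:OWNER/REPO#main"
  }
}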
Hope this gets you in the right direction.
Is it possible to change the template that hg log uses by default? I would like to derive a template that looks like the default but uses the mailmap function to show the mapped commit author instead of the originally recorded author.
Yes, you can do that via the [alias] section in an applicable .hgrc file. So if you know how to create an appropriate template, it's easy (I don't know what mailmap output looks like, so this is just an example of how to tackle it in .hgrc):
[alias]
log = log --template="{date|isodate} {author}\n\t{desc|tabindent}\n\n"
The main issue would be where to get the committer info from - a property Mercurial doesn't record by default (that is, author and committer are the same). There probably are extensions which allow that distinction - do you happen to use one?
Additionally, I would recommend NOT overriding the default command's output, but instead defining a new command like
[alias]
mlog = (whatever you need here)
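For example, assuming your Mercurial is recent enough to provide the mailmap() template function and the repository root has a .mailmap file, a sketch could look like this:

[alias]
# hypothetical: mirrors the template above, but maps the author through .mailmap;
# requires a Mercurial release that ships the mailmap() template function
mlog = log --template="{date|isodate} {mailmap(author)}\n\t{desc|tabindent}\n\n"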
Can you use Mercurial patch queues with Bitbucket now that it has its new design?
Previously you could click the "Patch queue" link in a repository to create a patch queue, but that link seems to be gone now. Am I missing something?
I am using Mercurial to manage a set of patches that I need in order to run some software on my own machines - in short, the method outlined here. I've been using this method for some time, and I've also wondered whether there's an easier way to do it with Git.
It seems that patch queues are treated as forks in Bitbucket now. To create a patch queue, click the Fork button, then click the "create a patch queue" link at the top right of the fork page.
You can then administer the patch queue by clicking on the Fork link on the Overview page and managing it as you would any other repository.
So I have a dependency - actually two dependencies - to which I'd like to make changes, either right now (like fixing JBSEAM-3424) or potentially in the future. The coding is not an issue - I'm capable of making the change - and I'm not seeking to fork the community project, just to have a local version, as recommended by Will Hartung, to get some work done.
My concern is that issues of process will come up and bite me further down the line. So what can I do to ensure I manage this properly? What best practices are there?
Some more specific sub-questions:
Should I change the artifact names?
How should I choose group, artifact, and version names?
Should I import the whole source tree or be selective?
What if I can't get the build system working in full? Should I scale it down or try to keep it close to the original?
Should I change the artifact names?
How should I choose group, artifact, and version names?
Keep the groupId and artifactId of the module(s) you change the same, but use a qualifier on the version to make it obvious that it is a non-standard version, for example 1.0.0-simon. This is pretty common practice.
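For example (the coordinates here are made up; the only point is the qualifier on the version):

<!-- keep the upstream groupId/artifactId; the -simon qualifier marks your patched build -->
<dependency>
  <groupId>org.example.upstream</groupId>
  <artifactId>upstream-module</artifactId>
  <version>1.0.0-simon</version>
</dependency>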
Should I import the whole source tree or be selective?
Update based on your comment: personally, I'd only add the artifacts I've changed to my local source repository. If you change another artifact later, add it to your SCM at that point.
What if I can't get the build system working in full
Worry about that when it happens. If the project is built with Maven it should be straightforward for you to build only the artifacts you need. If it uses an uber-ant build which you can't get working with your changes, then consider paring the build down.
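For instance, in a multi-module Maven build you can usually restrict the build to what you actually changed (the module name is a placeholder):

# build only the changed module plus whatever it needs (-am = --also-make)
mvn clean install -pl changed-module -am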