Is it possible to recover deleted Jobspec script? - palantir-foundry

Is there any way to recover a Jobspec file that was deleted? Or are versions of the Jobspec tracked somewhere that is accessible to a superuser?
TIA.
Some sources say that a superuser can manually edit a Jobspec, so I'm thinking the same might apply to recovering a deleted Jobspec file/script?

Related

Using Mercurial, I added a new file and wrote code in it, then deleted that file. Can I retrieve it?

Pretty much the title. I've looked at a lot of similar questions asked here, and I can't seem to find something that applies.
Started by syncing with HEAD. Created a few new files. Filled in those files, they were being tracked at this point. I then not only deleted the files, but also removed them from being tracked (because of stupid UI). According to my understanding, those files are gone for good, but I thought I'd check with people who are smarter than me: Is it possible to retrieve them?
Mercurial does not store uncommitted changes, so if you did not commit the files then they are lost.
If you did commit them, then hg update -C will restore them to the latest commit for your working directory, along with all other files, so make sure there are no other uncommitted changes you want to keep.
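If only one file needs to come back, you can also restore it selectively. A minimal sketch, assuming the file was committed at some point (the filename myfile.py and the revision number are placeholders):

# find the revision that last touched the file
hg log myfile.py
# restore just that file from a known revision, e.g. revision 42
hg revert -r 42 myfile.py
# or discard all local changes and reset to the latest commit
hg update -C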

Revert MS Access file without backup

I was using an Access file a colleague of mine created. They run it without issue on their computer all the time, but when I tried running it on mine, without modifying it, a value prompt window came up that isn't supposed to appear.
I think there's some sort of auto-save feature on this file, because even after closing it without saving, the message shows up again and I'm no longer able to run the macro within the Access file. The file is stored on a shared network drive, and file history isn't enabled on that drive or on my machine. I'm not too familiar with Access, and my colleague is on leave for some time; no one else seems to know Access very well either. Is there a way that I can restore the file, or the queries/macro inside it, to how they were before I opened it?
The basic answer is No. If something really was changed, there is no automatic history that can be used to recover changed settings or macros.
I realize it doesn't help to say anything now, but I am very surprised if there are no backups for the shared network drive. Is there no IT personnel who can assist? Regardless of whether you can get the file working, I would immediately make your own manual backup of the Access database file(s). As long as you have a backup, or better yet multiple backups, an Access file can simply be restored by copying the backup over the broken file.
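For example, a dated manual copy from the command line; a sketch with placeholder paths:

copy "\\server\share\Database.accdb" "C:\Backups\Database-2024-06-01.accdb"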
"Autosave" is an understatement with Access, because unlike a word processing document or spreadsheet which can be held completely in memory, a database file is constantly updated. There is no in-memory context for the database file as a whole. Access will almost immediately update the file once it is opened, simply because it manages things like file locks, etc. The database may have an Autoexecute macro or other code that runs automatically, but this may be accompanied by security prompts, especially if you haven't opened it on that computer before. For standard forms, changes to data on a form are saved to disk immediately and no "Save" button is required. Certain aspects of the database file should not changed unless explicitly told to do so, and these are usually design aspects not change accidentally.

Mercurial ignored file causes abort when trying to update to previous revision

Here's my scenario: When I initially created my Mercurial repo, I used hg add to add all *.pl *.sh and *.sql scripts to the repo. I later learned how to use the .hgignore file to exclude other files from the repo. One of the files I needed to exclude was a *.sql file that is generated by a script, so it is essentially a data file that constantly changes when the script runs that produces it; thus, I added it explicitly to the .hgignore file a few revisions ago.
Today, I want to update to a prior revision before this *.sql file was added to the .hgignore, so that I can create a branch off of it. However, when I try to update the working directory to this prior version, I get the following error:
a.sql: untracked file differs
abort: untracked files in working directory differ from files in requested revision
I know that one way I could get around this problem is to delete the file before trying to update to the prior revision, either by deleting it manually or by using hg update --clean (note that --clean cannot be combined with --check).
That may work in this particular case, since the file is auto-generated by a script each time, and so I don't care about the data that is currently in it.
However, I'm trying to find out what is the safe way people would generally handle this situation when they decide to ignore a file set (like a set of data files that aren't auto-generated) and need to return to a previous revision before they were marked to be ignored, especially if they wanted to retain the most current content in those file sets while still being able to view earlier revisions of files that Mercurial is actively tracking.
I've also considered that you could back up the files, but I think that is only a reasonable solution as a one-off. If you want the ability to hg update to previous revisions frequently and at whim, it becomes quite tedious to back up the data each time before updating to an earlier revision (it's also no guarantee that others won't delete data that isn't tracked in the repo).
Thanks for the help.
As for the general question of how people handle returning to a revision from before a file set was marked ignored, while keeping the current contents of those files: it depends.
If you have exclusive control over this repository, and have the practical ability to require everyone to re-clone from it, then you can use hg convert to exclude the files from the old revisions. This is by far the cleanest option, but it will change the revision identifiers (hashes) for those revisions and all of their topological descendants. This is why everyone has to re-clone; their old clones will not interact properly with the new repository.
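A minimal sketch of that approach, assuming the convert extension is enabled in your hgrc (the repository paths and the a.sql filename are placeholders):

# enable the extension in ~/.hgrc:
#   [extensions]
#   convert =
# filemap.txt lists the paths to drop from history:
#   exclude a.sql
hg convert --filemap filemap.txt oldrepo newrepo

Everyone then re-clones from newrepo.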
If you can't do that, you can copy the files somewhere else (you do have backups already, right?), clobber the originals with the old versions, and then restore them from your copy. This has to be done whenever you check the files out, so it is definitely suboptimal. You may be able to make this slightly easier by keeping the files outside the repository and checking in symlinks to the files, but you'll still have to fix up the symlinks whenever you checkout an old version.
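Concretely, for the single-file case above, the dance might look like this (a sketch; the backup path and revision identifiers are placeholders):

cp a.sql /tmp/a.sql.bak      # save the current untracked contents
hg update -C 12              # the old revision's tracked a.sql is checked out
# ... create your branch, inspect the old code, etc. ...
hg update default            # updating back removes the tracked a.sql
cp /tmp/a.sql.bak a.sql      # restore the saved contents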
However, what you describe is not the normal use case for Mercurial. Typically, untracked files are autogenerated, or at least able to be regenerated from tracked files. The operating assumption is that untracked files are not important and can be discarded at any time. Mercurial doesn't actually do this, because that would be rude, but neither does it make any special effort to preserve them when (for example) you make a bundle of the repository.
If you need to deal with versioning of object files, it is typical to store them in a separate artifact repository or some other system. This can be more difficult to manage because you have to reunite the binaries with the source code when you do a build. But it is much more robust than keeping the binaries loose in the repository and hoping they won't get accidentally overwritten or deleted.
Another option is to collapse the binary to text and then place the text under version control. This is always possible (e.g. take a hexdump) but may or may not be practical or reasonable, depending on the file format. For a compressed file format (e.g. tarballs, most image files, etc.) the hexdump is not going to be any easier to 3-way merge than the original binary, so there's little point in it. Similarly, if the binary is huge, the hexdump will be huge too. On the other hand, if a binary is compiled from source code, it is entirely normal to store the source and discard the binary. For something structured like an SQLite database, you might try storing an SQL script which will generate the database. For a zip file or tarball, store the contents. And so on. All of these things can be regenerated using make or a similar tool whenever you check things out, and you can automate this with a repo hook.
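For the SQLite example, a hedged sketch (data.db and data.sql are placeholder names; the update hook goes in the repository's .hg/hgrc):

# dump the database to a text file and track that instead
sqlite3 data.db .dump > data.sql
hg add data.sql
# regenerate the database after every checkout via a hook in .hg/hgrc:
#   [hooks]
#   update = rm -f data.db && sqlite3 data.db < data.sql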

Backup automatically

At our company we have a Google Spreadsheet that is shared by link with different employees. The spreadsheet is saved on a Google Drive to which only I have access. The link is configured so that anyone with it can edit the spreadsheet, since all employees need to be able to make changes to the file.
Although this is very useful, it also presents a risk of data loss. If a user were to (accidentally) delete or alter the wrong data and save the file, that data would be permanently lost.
To prevent this, I was wondering if it is possible to have a backup created automatically, say every day. Ideally, this backup would be saved in the same Google Drive. I know I could install the desktop client and have the file backed up by our daily company backup, but it seems a bit ridiculous to install it for just one file. I'm sure there has to be another solution, e.g. with scripts.
I followed the advice of St3ph and tried revision history. Not exactly what I meant, but an acceptable solution nonetheless.
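If you do want an actual daily copy rather than relying on revision history, a small script bound to the spreadsheet can make one. A sketch in Google Apps Script, assuming FILE_ID and FOLDER_ID are replaced with your own IDs:

// Copy the spreadsheet into a backup folder, with a date stamp in the name.
function backupSpreadsheet() {
  var file = DriveApp.getFileById('FILE_ID');       // placeholder ID
  var folder = DriveApp.getFolderById('FOLDER_ID'); // placeholder ID
  var stamp = Utilities.formatDate(new Date(), 'UTC', 'yyyy-MM-dd');
  file.makeCopy(file.getName() + ' backup ' + stamp, folder);
}

// Run once to install a daily time-driven trigger.
function installTrigger() {
  ScriptApp.newTrigger('backupSpreadsheet')
    .timeBased()
    .everyDays(1)
    .create();
}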

How can I commit a set of files only once in Mercurial?

I have some files I'd like to add so I have them as a "backup". The thing is, I'd like to commit them only once, and then I'd like Mercurial to stop tracking them (not notify me when they change, and not include them in other commits).
Basically, something like this:
hg add my_folder
hg commit -m "added first version of my_folder"
Then, after a while, the contents of that folder might change. And if I commit other files, the new version of that folder will get committed as well. This is something I'd like to avoid. Is it possible, without directly specifying which files I want to commit?
I've never seen any option in Mercurial that might allow that... but why not simply copy them elsewhere?
I mean, what's the point of using a version control system if you don't need versioning on these items anyway?
We ran into a similar case with binary documents (.doc files, images, etc.) and finally decided to commit them to a separate repository dedicated to those.
I think the traditional way of doing this is to commit files named something like "file.ext.default", and just inform users that they should copy the defaults and modify the copies.
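A hedged sketch of that convention (settings.conf is a placeholder name):

# commit the pristine copy once, under a .default name
hg add settings.conf.default
hg commit -m "added first version of default settings"
# ignore the live copy so local edits never show up in hg status
echo "settings.conf" >> .hgignore
hg add .hgignore
hg commit -m "ignore local settings copy"
# each user then makes a private working copy
cp settings.conf.default settings.conf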
VCSs aren't backup systems; consider using a proper backup mechanism.
Having said that, you should be able to approximate this using hooks. There are many ways you could do it, but an ACL hook would be an obvious one, assuming a remote server.