I am downloading a file from AWS S3 in a GitHub Actions workflow. In the next step (same job) I try to edit the file. Sometimes the file is still there, and sometimes it isn't.
Each step runs a bash script, and I check at the end of the first step that the file exists. The file is being downloaded to the $HOME directory, so the path to the file is /home/runner/my-file.json
Where should I download the file to, to guarantee that it is still there on the next step?
Just to close this off: files downloaded to $HOME are persisted between steps in the same job. I finally realised that the next step was trying to edit the file twice concurrently (I'm using some lerna scripts), and that is why it was sometimes reported as empty.
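For anyone who wants to see the shape of this, here is a minimal workflow sketch; the step names, bucket/key, and the jq edit are illustrative assumptions (AWS credential setup omitted), not taken from the question:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Download file from S3
        run: |
          # hypothetical bucket/key, just to show the download landing in $HOME
          aws s3 cp s3://my-bucket/my-file.json "$HOME/my-file.json"
          test -f "$HOME/my-file.json"
      - name: Edit the file
        run: |
          # same job = same runner filesystem, so the file is still here
          jq '.updated = true' "$HOME/my-file.json" > "$HOME/tmp.json" && mv "$HOME/tmp.json" "$HOME/my-file.json"
Both steps run on the same runner, so as long as they are in the same job, no artifact upload/download is needed.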
Unfortunately, you can't keep this file after the action finishes running. To solve this, you could push the file to the repo the action is running in; this can be done through git commands.
I am new to TeamCity and I am trying to do the following:
I have an exe file in my project, and a build step in TeamCity that runs the exe. When the exe runs, it saves a JSON file in the same folder as the exe.
How can I publish this JSON file to the artifacts in TeamCity?
In your build configuration settings, on the General page, you are able to specify which file(s) or folder(s) to publish as artifacts. You can export the file(s)/folder(s) as is, or zip them. Refer to the documentation and the quick help dialog (mouse-over the little info-icon next to the text field) for syntax flavors.
Simply writing the name of the JSON file should be enough, though. If your build has already run at least once, you can click the folder-structure icon on the right to see an example of the contents from the previous run; in that box, you can simply click the content you want exported. Note: this does require a recent run.
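For illustration, if the exe writes a file called result.json into a bin folder under the checkout directory (both names are hypothetical here, not from the question), the artifact paths field could contain either of these lines:
bin/result.json
bin/result.json => results.zip
The first publishes the file as is; the second packs it into a results.zip archive, matching the two options described above.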
I have added my webpage contents to /var/lib/openshift/XXXXXXXXXXXXXXXXXXXXX/app-root/runtime/dependencies/jbossews/webapps/ directory and I can see my webpage.
I have not used a git repository.
I tried adding a shell script under /var/lib/openshift/XXXXXXXXXXXXXXXXXXXXX/app-root/runtime/repo/.openshift/cron/hourly directly. I don't see the script running.
I feel it has to be pushed to some service. Can this be done without git at all?
Adding example:
I have not created a git repository.
I have an index.html with just some header and body placed under /var/lib/openshift/XXXXXXXXXXXXXXXXXXXXX/app-root/runtime/dependencies/jbossews/webapps/test/.
Now I can access the website at www.test-rhtest.rhcloud.com/test/index.html.
Now I have a shell script, say test.sh, as below:
#!/bin/bash
echo "$(date)" >> "$HOME/app-deployments/temp.txt"
When I execute the script test.sh manually, it creates the file $HOME/app-deployments/temp.txt.
Now I have placed the script under /var/lib/openshift/XXXXXXXXXXXXXXXXXXXXX/app-root/runtime/repo/.openshift/cron/hourly. I waited for hours to see it execute, but no luck.
How should I enable this cron now?
The script SHOULD run, but you have to make sure that you made it executable (chmod +x) on the server, and that you can run it manually without getting any errors. Also, it seems you should put it into your ~/app-root/repo/... directory, instead of the runtime one.
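For example, assuming the paths mentioned in the question, something along these lines over SSH on the gear makes the script executable and lets you verify it runs cleanly by hand before waiting on cron:
chmod +x ~/app-root/repo/.openshift/cron/hourly/test.sh
~/app-root/repo/.openshift/cron/hourly/test.sh
cat ~/app-deployments/temp.txt
If the manual run appends to temp.txt without errors, the hourly cron job should be able to do the same.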
I've created an Access database to be shared through the entire department, which I've split into a front end and a back end. Unfortunately, there's no easy way I can figure out to ensure all users are consistently using the newest version of the front end on their local machine as I add requested updates.
To overcome this, I created an install batch script that creates a shortcut on their desktop, as well as nesting the front end and an "update" batch script in a custom folder on their PC. The shortcut actually links to the "update" batch script, which then downloads the newest version of the front end (overwriting the existing one), then loads it.
Ideally, this would not download it every time, and would instead only download it if the version of the front end on the network is newer than the one on the local machine. Unfortunately, I can't seem to figure out how to do this with an accdb file (though I've seen information for executable files). We are using Access 2010 and an Access 2007 file type. I still have not figured out how to append a version number to the front end, but I'm open to including a text file simply to store that version number. Any suggestions?
Below is the script I currently have for the update file.
@ECHO OFF
CLS
XCOPY "\\NetworkPath\Install\*.accdb" c:\Reserved\Database /y /q
XCOPY "\\NetworkPath\Install\Update.bat" c:\Reserved\Database /y /q
CLS
ECHO Starting database...
START "" "C:\Reserved\Database\FrontEnd.accdb"
I've done the exact same thing, and solved the problem of only re-downloading the frontend when it has changed by using the xcopy command with the /d switch:
xcopy /yqd \\network\frontend.accdb frontend.accdb
Xcopy reference: http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/xcopy.mspx?mfr=true
That works, but it leaves a small gap in the logic: if someone is using their local copy of the frontend while you push a new version to the network, and they then exit the frontend and run the script again, it won't download the new version, because the user's local copy will have a later modification time.
To overcome this, I actually make a copy of the local frontend and start that from the script, instead of starting the downloaded copy. That way the downloaded copy retains its original modification time and xcopy's time check works correctly. You do have to train your users though to ignore the local copies of the accdb file and only use the script.
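Putting that together with the paths and file names from the question's script, an update script along these lines is one way to sketch it (the FrontEnd_local.accdb working-copy name is made up for illustration):
@ECHO OFF
REM Copy from the network only when the network copy is newer (/d).
XCOPY "\\NetworkPath\Install\FrontEnd.accdb" "C:\Reserved\Database\" /y /q /d
REM Launch a working copy so the downloaded file keeps its original timestamp.
COPY /y "C:\Reserved\Database\FrontEnd.accdb" "C:\Reserved\Database\FrontEnd_local.accdb"
START "" "C:\Reserved\Database\FrontEnd_local.accdb"
Because users open FrontEnd_local.accdb rather than the downloaded FrontEnd.accdb, xcopy's date comparison keeps working across pushes.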
I am playing with a repo I've cloned in nitrous.IO.
I mistakenly created a "README_UPDATE" file without the .md extension when I was planning to push my commit to GitHub. I've tried 'rm README_UPDATE' in the command prompt with no success, and right-click file delete is not working in either the nitrous.io web IDE or their Google app extension.
I've also tried 'ls' to make sure I was in the right directory, just in case anyone asks.
I don't want to delete all my work, so are there any other options for me?
Have you tried right clicking the file and selecting 'rename'?
The source for my Jekyll-powered website lives in a git repo, but the website also needs to have a couple large static files that are too large to go under version control. Thus, they are not part of the Jekyll build pipeline.
I would like for these to simply live in an assets directory in the Jekyll destination (which is a server directory; note that I don't have any control over the server here; all I can do is dump static files into a designated directory) that does not exist in the git repo. But running jekyll build deletes everything in the output directory.
Is there a way to change Jekyll's behavior in this case? Or is there some other good way to handle this issue?
Not sure this addresses the specific case in the OP, but since I kept getting to this page before I finally found an answer here, I thought I'd add an answer to this question in case it helps others.
I have a git post-hook that builds my Jekyll site on my web host when I push to it, but it was also deleting anything else that I had FTP'ed over. So now I've put anything I need to stick around in a directory (external/ in my case), and added the following to my _config.yml:
exclude: [external]
keep_files: [external]
and now files in external/ survive.
If you upload Jekyll's output directory via FTP to your server, you can use an FTP tool that lets you ignore folders.
For example, my own site is built with Jekyll, but hosted on my own webspace, so I'm uploading it via FTP.
I explained in this answer how I scripted the building and uploading process, so I can update my site with a single click.
In my case (Windows), I used WinSCP, a free command-line FTP client, for this.
If you're not on Windows, you need to use something else, but there are probably other FTP tools out there that are able to ignore folders.
To ignore your assets folder in WinSCP, you just need to put this line into the script file (the file which contains the actual WinSCP commands - read my other answer for more information):
option exclude "assets/"
Now you can upload your large assets folder on the server once, and it won't be overwritten/deleted when you later update your site via FTP.
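For reference, a minimal script file around that line could look roughly like this; the host, credentials, and directory names are placeholders, not taken from the answer above:
option batch abort
option confirm off
open ftp://username:password@example.com/
option exclude "assets/"
synchronize remote -delete _site/ /htdocs/
exit
With the exclude mask in place, the synchronize command leaves the remote assets/ folder alone even though -delete removes other remote files that no longer exist locally.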