One of the steps of my GitHub test action for pull requests is installing third-party software via
- name: Install imagemagick and graphviz
  run: |
    sudo apt-get install -y graphviz
    sudo apt-get install -y imagemagick
The package size seems to be about 15 MB, see https://imagemagick.org/script/download.php. That's not too bad. But it made me wonder: if I installed a package of, say, 500 MB, would the GitHub servers have to download the 500 MB every time the action is triggered? That would be bad...
Yes, it will download them each time unless you cache them. You can find more details here: Caching APT packages in GitHub Actions workflow. You can also create your own Docker image with the packages pre-installed and use that image in your pipeline. You will also find an example in the above-mentioned topic.
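For illustration, here is a minimal sketch of the caching approach using actions/cache; the cache path, the key name, and the dir::cache::archives override are assumptions to adapt to your workflow:

- name: Cache APT archives
  uses: actions/cache@v3
  with:
    path: ~/apt-cache
    key: apt-cache-${{ runner.os }}-graphviz-imagemagick
- name: Install imagemagick and graphviz
  run: |
    # apt expects a partial/ subdirectory inside its archives dir
    mkdir -p ~/apt-cache/partial
    sudo apt-get update
    # redirect apt's .deb download cache to the directory restored by actions/cache
    sudo apt-get -o dir::cache::archives="$HOME/apt-cache" install -y graphviz imagemagick

Note this only caches the downloads; the install step itself still runs on every trigger, which is why a prebuilt Docker image can be the better option for very large dependencies.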
Related
I have been trying to install the Google.Apis.Drive.v3 package with Install-Package Google.Apis.Drive.v3, following this source, with the difference that I have Ubuntu 18.04 instead of Windows.
I know it may be a simple question, but I have been trying to research how to do this since this morning. I installed NuGet with sudo apt install nuget on my machine and have been trying to add packages, in this case the Google.Apis.Drive.v3 package, but no luck.
I went through this source, which was useful but does not contain information I was able to replicate on my Linux machine.
Also this source, this one, and this one too. But this last one is for Windows and was not very useful either.
How do I install Google APIs Drive v3 via the command line, as easily as is documented for Windows, but on Ubuntu 18.04?
Thanks for pointing me in the right direction for solving this problem.
Solution
How you install the Drive API's client library depends on the programming language you are aiming to use. These are the commands to run for the different languages that interact with the API (with their respective links to the source of the setup):
Python:
pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
C#/.NET:
Create a new Visual C# Console Application project in Visual Studio.
Open the NuGet Package Manager Console, select the package source nuget.org, and run the following command:
Install-Package Google.Apis.Drive.v3
Java:
gradle init --type basic
mkdir -p src/main/java src/main/resources
Node.js:
npm install googleapis --save
For the browser, check out the steps to follow here.
I hope this has helped you. Let me know if you need anything else or if you did not understand something.
NOTE: For all Ubuntu 18.04 users who wish to install via the command line, the correct way is: sudo dotnet add package Google.Apis.Drive.v3
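For context, a minimal end-to-end sketch on Ubuntu 18.04, assuming the .NET SDK is already installed (the project name DriveQuickstart is just an illustration):

dotnet new console -o DriveQuickstart   # scaffold a console project
cd DriveQuickstart
dotnet add package Google.Apis.Drive.v3 # restores the package from nuget.org
dotnet build

Since dotnet add package restores into the project's own directory, sudo is usually not actually required.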
I need to install mysql-server on an Ubuntu 18 machine which does not have any internet access. There is a plethora of instructional material on this subject, but all of it requires the Ubuntu machine to be online.
One such document (quite comprehensive) is available here:
https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-18-04
Any help on offline installation of mysql-server would be greatly appreciated.
I suggest you follow this guide on how to use apt-offline: https://linoxide.com/debian/install-debian-packages-offline/
As a general guide:
You start by having apt-offline installed on both PCs. It is installed by default on the desktop releases, but can easily be added by downloading the .deb package for your release from the packages.ubuntu.com website: https://packages.ubuntu.com/bionic/all/apt-offline/download
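On the offline machine, the downloaded package can then be installed with dpkg (the glob below stands in for whatever version you downloaded):

sudo dpkg -i apt-offline_*_all.deb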
Then create a signature file that can be taken to another PC that will do the downloading/fetching of updates, making a note that we also need mysql-server:
apt-offline set offline-servers-state.sig --install-packages mysql-server
You can then use this signature on a PC connected to the internet, using the same tool, to check for updates and/or download the required files into a zip archive:
apt-offline get --bundle zip/file/location/bundle.zip offline-servers-state.sig
Once downloaded, you can put this .zip back on the offline server to install the packages:
apt-offline install zip/file/location/bundle.zip
You can visit https://dev.mysql.com/downloads/mysql/ from a computer that can go online.
Then select your OS and version.
Download the DEB Bundle on a computer that can go online, and move the downloaded file over to the system that cannot reach the Internet.
Your downloaded file will be a .tar. Extract it with tar -xvf filename.tar (see https://www.cyberciti.biz/faq/tar-extract-linux/ for the command).
You will get a new directory; cd that-directory will get you into it. You will see a bunch of .deb files.
Install the .deb files one by one using sudo dpkg -i filename.deb, depending on what you want to install. Other ways to install .deb files can be found in this discussion: https://unix.stackexchange.com/questions/159094/how-to-install-a-deb-file-by-dpkg-i-or-by-apt.
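Putting those steps together, a minimal sketch (the bundle filename is an assumption and varies by MySQL version):

# extract the bundle into its own directory
mkdir mysql-debs
tar -xvf mysql-server_*.deb-bundle.tar -C mysql-debs
cd mysql-debs
# install all the .deb files in one pass so dpkg can configure them together
sudo dpkg -iR .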
I am attempting to install closed-source software from Silego, GreenPAK Designer, on a machine running Fedora 19. The supported installation packages on Silego's website only target Ubuntu and Debian. I downloaded the .deb package and used Alien to convert it to an RPM. So far so good, but a dry run of yum install showed dependency errors, which I solved by installing the necessary packages with yum:
qt5-qtbase
qt5-qtbase-gui
qt5-qtdeclarative
qt5-qtlocation
qwt
Now, yum installed the above libraries in /usr/lib/ but the GreenPAK RPM defaults to /usr/local/bin as the output dir. I figured I could run
sudo yum localinstall --nodeps --noscripts greenpak-designer-x.x.x.rpm
and get a successful install, but I received conflict errors relating to dirs such as '/', '/usr', '/usr/bin', etc. I worked around this issue with:
rpmrebuild -pe --notest-install --replacefiles --noscripts greenpak-designer.x.x.x.rpm
and removing the offending lines in the script. It allowed me to install the RPM, but the software is broken because of dependency issues (not surprisingly). From the system log:
Jan 4 16:06:49 pelican gnome-session[1729]: /usr/local/greenpak-designer/bin/GP5: error while loading shared libraries: libicui18n.so.52: cannot open shared object file: No such file or directory
The machine has a /usr/lib/libicui18n.so.50
One thing I did not try is rebuilding my shared object cache with ldconfig, which sometimes solves problems with missing .so links when building from source, but I don't see how that would apply in this instance (I'm not trying to link object files to libraries, rather simply trying to drop binaries into default install locations, no?).
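For reference, ldd can enumerate everything the binary fails to resolve in one go (a diagnostic sketch using the path from the log above):

ldd /usr/local/greenpak-designer/bin/GP5 | grep "not found"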
Of course, I contacted the vendor and begged for an RPM. The contact was helpful but informed me the software folks are on a well-deserved break. I thought I'd continue puttering with this in the meantime.
Any ideas? It seems the solution to this problem would be helpful when trying to install almost any closed-source software targeting Debian on a Fedora box.
On my Fedora 19 system, yum update attempts to reinstall a large number of packages I have previously removed. This should not happen, as the packages listed are not installed and should not be suggested by yum. How can I make yum work in the expected manner, with updates suggesting only upgrades to installed packages?
Background: I have been trying out new DEs, installing and removing them as I go. Currently I'm in a DE-less state, booting directly into a tty terminal. My system has no (or few hidden) xfce or cinnamon packages to "upgrade", yet the package manager is suggesting 300 packages to install, totaling 600M of new installs.
Terminal output gist:
https://gist.github.com/Redoubts/29400f0b98cd13120a6a#file-gistfile1-txt
Short answer: it's not possible to disallow installing packages from the dependency chain. Either you install all of them, or you drop the ones that depend on the unwanted packages.
In some cases, when a package from a dependency chain is required only during some specific stage of installation (say, for execution of a pre- or post-install script), it's possible to remove that package later, after the complete installation. But that's not what you want, I suppose.
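To see which installed packages are pulling an unwanted one in, repoquery from yum-utils can help (a sketch; xfce4-session is just a stand-in for whatever package yum keeps suggesting):

# assumes yum-utils is installed: sudo yum install yum-utils
repoquery --installed --whatrequires xfce4-session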
I am trying to setup Travis CI to deploy my repository to Openshift on a successful build. Is there a way to deploy a repository besides using Git?
Git is the official mechanism for how your code is updated; however, depending on the type of application you are deploying, you may not need to deploy your entire code base.
For example, Java applications (war, ear, etc.) can be deployed to JBoss or Tomcat servers by simply taking the built application and checking it into the OpenShift git repository's webapps or deploy directories.
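A rough sketch of that flow for a Tomcat (jbossews) cartridge; the gear's git URL comes from rhc app show, and the one below is made up:

# clone the gear's git repository
git clone ssh://abc123@myapp-mydomain.rhcloud.com/~/git/myapp.git/
cd myapp
# drop the prebuilt war into the cartridge's webapps directory and push it
cp ../target/ROOT.war webapps/
git add webapps/ROOT.war
git commit -m "Deploy prebuilt war"
git push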
An alternative to this (and it will be unsupported) is to scp your application to the gear using the SSH key. However, any time the application is moved or updated (with git), this content stands a good chance of getting deleted (cleaned) by the gear.
We're working on direct binary deploys ("push") and "pull"-style deploys (OpenShift downloads a binary for you). The design/process is described here:
https://github.com/openshift/openshift-pep/blob/master/openshift-pep-006-deploy.md
You can do an SCP to the app-root/dependencies/jbossews/webapps directory directly. I was able to do that successfully and have the app working. Here is the link.
Here is the code which I had in the after_success block:
after_success:
  - sudo apt-get -y install sshpass
  - openssl aes-256-cbc -K $encrypted_8544f7cb7a3c_key -iv $encrypted_8544f7cb7a3c_iv -in id_rsa.enc -out ~/id_rsa_dpl -d
  - chmod 600 ~/id_rsa_dpl
  - sshpass scp -i ~/id_rsa_dpl webapps/ROOT.war $DEPLOY_HOST:$DEPLOY_PATH
Hope this helps.