Is there an existing way to package the installation of a virtual machine for KVM/QEMU?
I mean a single file that QEMU can import, just like OVA/OVF for VMware.
Currently, I write a script that uses "virt-install" and an ISO file to install an OS.
That works, but is there a better method? For example, a single file that bundles the "virt-install" invocation and the ISO file, or something like that.
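A minimal sketch of the kind of "virt-install" call I mean (the VM name, sizes and ISO path are just placeholders, not my real values):

virt-install \
  --name demo-vm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /path/to/installer.iso \
  --os-variant generic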
I want to import and use the dataset package for Python in AWS Lambda. The dataset package is for connecting to MySQL and executing queries. But when I try to import it, I get this error:
"libmysqlclient.so.18: cannot open shared object file: No such file or directory"
I think the problem is that the MySQL client library is required, but there is no MySQL package on the AWS Lambda machines.
How do I add this third-party dependency, and how do I link against it?
You should install your packages into your Lambda folder:
$ pip install YOUR_MODULE -t YOUR_LAMBDA_FOLDER
Then compress the whole directory into a zip and upload it to your Lambda.
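For example, something like this (the archive name is just a placeholder):

$ pip install YOUR_MODULE -t YOUR_LAMBDA_FOLDER
$ cd YOUR_LAMBDA_FOLDER
$ zip -r ../lambda_function.zip .

Zip the contents of the folder (note the trailing dot) rather than the folder itself, so that your handler module ends up at the top level of the archive.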
What you have to do is include the needed binaries in your Lambda package.
You need to use pip and create an isolated environment. The zip you upload to Lambda needs to include the python2.7/site-packages directory (the packages installed with pip).
There are also extreme cases of OS-level dependencies, and those have a trickier solution: you have to spin up an Amazon Linux EC2 instance in order to build/get those dependencies and package them with your Lambda. Once your Lambda is packaged, you can terminate the EC2 instance.
Check this guide if virtualenv is not enough.
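Roughly, the packaging steps on such an instance look like this (module, project and file names are just placeholders):

$ virtualenv env
$ source env/bin/activate
$ pip install YOUR_MODULE
$ cd env/lib/python2.7/site-packages
$ zip -r ~/lambda.zip .
$ cd ~/YOUR_PROJECT
$ zip -g ~/lambda.zip lambda_function.py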
This is an OS-dependent system file. I'm guessing that you successfully installed the Python MySQL client, but you still need the system MySQL client library, which seems to be a different version on your system than the one on Lambda. While building your virtual environment on the official Lambda image will definitely fix this problem, you might have some luck copying your own copy of this system file into your Lambda zip file.
I found mine with
locate libmysqlclient.so.18
Note: depending on your system, the version number at the end might be different. Use the version in the error you receive.
Adding that file on the top level of my zip file with
cd /path/from/locate/to/libmysqlclient
followed by
zip -u /path/to/lambda/zip/file.zip libmysqlclient.so.18
worked for me.
I have created a Python interface to my library using SWIG. This Python interface uses numpy. All of this works correctly.
Now, I want to package this Python interface into a Python wheel. Packaging for Windows works correctly. Here is my extension definition:
myext = Extension("MyExt",
                  sources=["MyExt.i"],
                  swig_opts=["-py3", "-I/usr/include", "-includeall"],
                  libraries=["mylib"],
                  )
On Windows, compilation happens directly in the directory containing the sources and setup.py. This is not the case on Linux when building my bdist_deb (same for bdist_rpm), and here is my problem.
The file MyExt.i includes numpy.i, so I should add numpy.i as a source file of the extension. However, if I do that, setuptools also tries to run SWIG on numpy.i, which is not what I want. I haven't found any other parameter of Extension that would accept such a file.
Does anyone know how to get around this issue?
I am trying to generate a standalone executable from a single Tcl file, using the tclkit.exe method described at http://wiki.tcl.tk/11861.
The problem is that the Tcl file uses three packages:
package require Tk
package require tcom
package require Img
I was not able to successfully add the packages to the lib folder of the generated VFS directory. Whenever I run the exe, it says it failed to load tcom.dll.
By the way, there are a lot of different versions of ActiveState Tcl and tclkit.exe for x86 and x64 systems. I am doing the whole thing on a 64-bit Windows 7 system. What am I doing wrong? Please help.
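As I understand the wiki recipe, the steps are roughly the following (file names are placeholders; tclkit-copy.exe is a second copy of tclkit used as the runtime, and it must have the same bitness as the tcom.dll you copy into lib, since a 64-bit runtime cannot load a 32-bit DLL):

tclkit.exe sdx.kit qwrap myscript.tcl
tclkit.exe sdx.kit unwrap myscript.kit
rem copy the tcom, Img, ... package directories into myscript.vfs\lib
tclkit.exe sdx.kit wrap myscript.exe -runtime tclkit-copy.exe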
I'm trying to create packages for some robot controller code that will support different architectures, such as i386 and armhf (for Raspberry Pi). I don't know how Debian intends this to be done. Is there a way to create a single .deb package that contains both binaries? Or must I create a separate .deb package for each architecture, which I do know how to do?
In the latter case, if I give the two packages the same package name, I can't put them both in the same repository, but if they have different names, users will have to specify which package they want to install using apt-get. Is there a solution to this problem?
You need to have different binary packages for different architectures, unless what you're packaging is interpreted rather than compiled.
If the package you're making can be built for every architecture, then the Architecture: field of your debian/control file must be any. This says that the package can be built for any Debian-supported architecture. Then you just compile it natively and cross-compile it for i386 and armhf.
About your second question: yes, you can. In fact, this is how it's done in the official Debian repositories. The binary packages carry the architecture as a suffix in the file name. See this example. When users install your package, they won't need to specify the architecture, as it is detected automatically.
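Roughly, assuming the debian/ directory is already set up (the package name and version below are placeholders, and cross-building additionally needs an armhf cross toolchain installed):

dpkg-buildpackage -us -uc            # on i386, produces robot-controller_1.0-1_i386.deb
dpkg-buildpackage -us -uc -aarmhf    # cross-build, produces robot-controller_1.0-1_armhf.deb

Both files can sit in the same repository because the architecture is part of the file name.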
I can't figure this out: do the files referenced in the Binary element of a .wxs file get copied to the target machine, or are they just resources of the install package?
They are definitely resources of the install package. This means they don't get installed to your application folder; instead, Windows Installer is supposed to extract them internally to some temporary location when it needs the functionality in them (a custom action DLL, for example), and to clean up after itself afterwards. In any case, they are not visible to the end users of your installation.
At least, this is how I understand it.