Plone/SQLAlchemy - How can I import a Python package (e.g. sqlalchemy) in a module in a subpackage?

I am trying to import sqlalchemy in a module in a subpackage.
Here is my folder layout:
PloneInstance/
  my.package/
    my/
      package/
        subpackage/
In the buildout.cfg file of the root folder, I add "sqlalchemy" to the eggs.
In my.package, in configure.zcml, I add:
In the subpackage, I have a blank __init__.py file, a configure.zcml file, and a file called mymodule.py
In mymodule.py I have a line that imports sqlalchemy:
import sqlalchemy
Unfortunately, I am getting an error when I try to run an instance:
ImportError: No module named sqlalchemy
I'm assuming I am missing a step. How do I properly import python packages?
Thank you in advance. I apologize if my terminology is off.
Edit:
The module in question I am importing from turned out to be zope.sqlalchemy.
I overlooked this because, before I moved the files into a subpackage, the import of zope.sqlalchemy worked without adding zope.sqlalchemy to the eggs section of the buildout.

Look in the setup.py file at the top directory of your package. You'll find a section like:
install_requires=[
    'setuptools',
    # -*- Extra requirements: -*-
],
In place of the "Extra requirements" comment, put a comma-separated list of strings specifying your package's requirements. You may even specify versions.
Do not add standard Plone packages to the list. They're taken for granted.
Re-run buildout after specifying your requirements. The result is that the new requirements will be added to your Python environment when you start Plone.
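For example, a filled-in section might look like the sketch below (a minimal setup.py, assuming the my.package layout from the question and that zope.sqlalchemy is the dependency actually imported, as noted in the edit):
from setuptools import setup, find_packages

setup(
    name='my.package',
    packages=find_packages(),
    install_requires=[
        'setuptools',
        # -*- Extra requirements: -*-
        'SQLAlchemy',        # if you import sqlalchemy directly
        'zope.sqlalchemy',   # the package the question actually needed
    ],
)
After editing setup.py, re-run buildout so the new requirements are downloaded and put on the instance's path.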

Related

name 'nltk' is not defined

The nltk module is running with other libraries in the corpus folder.
My Code
I've already tried putting import nltk at the top, but it's still the same, and I've also tried from nltk.tokenize import PunktSentenceTokenizer. I don't know why the Python shell can't find the definition of nltk. How should I address this? I am still learning how to write and code Python.
First, install the nltk package by typing...
pip install nltk
Then you need to import it...
import nltk
You misspelled the name of the package in your file: you used ntlk instead of nltk.
change
tagged = ntlk.pos_tag(words)
to
tagged = nltk.pos_tag(words)
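A minimal, self-contained sketch of the corrected flow (the sample sentence is illustrative, and the nltk.download() calls only need to run once to fetch the tokenizer and tagger data):
import nltk

# one-time downloads of the tokenizer and tagger data
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

text = "The quick brown fox jumps over the lazy dog."
words = nltk.word_tokenize(text)
tagged = nltk.pos_tag(words)   # note: nltk, not ntlk
print(tagged)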

No module named _caffe

_caffe.so is present in the caffe/python/caffe folder
I have set the path variable with export PYTHONPATH=/home/itstudent1/caffe/python:$PYTHONPATH.
make pycaffe was also successful.
I don't understand what else might be causing this error. I am able to import caffe in Python.
File "/home/itstudent1/MajorProject/densecap-master/lib/tools/../../python/caffe/pycaffe.py", line 13, in <module>
    from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
ImportError: No module named _caffe
It seems like you have two versions of caffe:
one in /home/itstudent1/caffe and another in /home/itstudent1/MajorProject/densecap-master.
While the first version is built and compiled, the latter is not, and your import looks for _caffe.so in the latter.
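A minimal sketch of one workaround, assuming /home/itstudent1/caffe/python is the tree where make pycaffe succeeded: put that copy at the front of sys.path before anything imports caffe, so the compiled _caffe.so is the one that gets found.
import sys

# path to the caffe tree that was actually built with 'make pycaffe' (from the question)
COMPILED_CAFFE = '/home/itstudent1/caffe/python'
sys.path.insert(0, COMPILED_CAFFE)

import caffe  # now resolved from the compiled tree, which contains _caffe.so
print(caffe.__file__)
Alternatively, building pycaffe inside the densecap-master copy would give that tree its own _caffe.so.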

how to prepare cython submodules

I have three questions, but they are related and I'm not sure how to split them cleanly. I've found plenty of information about these issues, like submodule extensions, hierarchy, empty __init__.py files, and how to cythonize multiple pyx files. But when I try to combine them I cannot make them work.
I've prepared a small repo intended to collect code samples for these issues once solved. I've even checked the code of some of the listed projects that use Cython, but I still don't get how to do the three things at the same time.
Empty __init__.py file:
In a project where all the files are pyx (and pxd if need be), with an __init__.pyx that includes all of them, when there is also an __init__.py file, importing the package doesn't load the ".so" but the empty __init__.
cythonize multiple files:
When, instead of preparing an __init__.py that includes all the elements of the module, I cythonize the files separately, like:
cythonize(['factory.pyx', 'version.pyx'])
importing the resulting ".so" raises an exception:
>>> import barfoo
(...)
ImportError: dynamic module does not define init function (PyInit_barfoo)
This may be related to the previous question, if it turns out something has to be written in __init__.py.
Submodule:
In fact, this is the main question. The singleton.pyx would be part of a submodule, let's say utils, to be used from other elements in the module.
For the sample there is a submodule (simply called subm) added in setup.py as another extension. I've placed it earlier than the main one (I don't know if this really makes any difference; I didn't notice one).
>>> import barfoo
>>> import barfoo.subm
(...)
ImportError: No module named 'barfoo.subm'
Separately, those recipes work, but together they don't for me. The "__init__.py" seems to be necessary when there is a mix of "py" and "pyx" files. The examples explain how to cythonize multiple files, but don't include the last key point for the import. And the submodule examples don't explain how they can be imported one way or the other (importing submodules when the base one is imported, or optional imports only when they are explicitly requested).
Thanks to the comments from oz1 and DavidW, I've got the solution. Yes, those three things come together.
The order of the imports in setup.py is very important. Even though PEP 8 doesn't say that imports should be alphabetically sorted, there are other guidelines (like reddit's) that do, and I usually follow them:
Importing cythonize first and setup afterwards causes cythonize(find_pyx()) to return a list of distutils.extension.Extension objects.
from setuptools import setup, find_packages
from Cython.Build import cythonize
setuptools must be imported before cython, and then the result of cythonize() will be a list of setuptools.extension.Extension objects that can be passed to the setup() call.
It is important to understand the meanings of the __init__ files:
All the __init__.pyx files with includes have been removed, and each .pyx file produces its own .so binary.
The main module and the submodules will exist as long as their directories contain an __init__.py file, just as with pure Python code.
In the example I've linked, the file barfoo/__init__.py is not empty because I want import barfoo to provide access to elements like version() or Factory(). This __init__.py is what imports them, just like normal pure Python code.
For the submodule:
It is similar for the submodule and its own __init__.py file. In this example, import barfoo does a from .factory import Factory, which in turn calls from barfoo.subm import Bar, so subm becomes available.
But even if the submodule is not imported in this indirect way, the user can still access it with calls like import barfoo.subm.
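Putting those pieces together, a minimal setup.py sketch under these assumptions (the barfoo/subm layout is the one from the linked example; find_packages() only picks up directories that contain an __init__.py):
from setuptools import setup, find_packages   # must come before the Cython import
from Cython.Build import cythonize

setup(
    name='barfoo',
    packages=find_packages(),                   # finds barfoo and barfoo.subm via their __init__.py files
    ext_modules=cythonize(
        ['barfoo/*.pyx', 'barfoo/subm/*.pyx'],  # each .pyx is compiled to its own .so
    ),
)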
Last night I saw your question, and made a simple example according to the wiki. But that question was deleted quickly.
Here is the example: https://github.com/justou/cython_package_demo
Make sure the C compiler and Python environment are set up correctly, then compile the pyx files by running:
python setup.py build_ext --inplace
Usage is the same as for a Python package:
from dvedit.filters import flip, inverse, reverse
flip.show() # print: In flip call Core function
inverse.show() # print: In inverse call Core function
reverse.show() # print: In reverse call Core function
BTW, there is no need to create an __init__.pyx; you can do the extension-module imports in the __init__.py file.
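For example, a dvedit/filters/__init__.py along these lines (a sketch based on the layout of the linked demo) is enough to expose the compiled extensions:
# dvedit/filters/__init__.py
# re-export the compiled extension modules so that
# 'from dvedit.filters import flip, inverse, reverse' works
from . import flip, inverse, reverse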

How to load jar dependenices in IPython Notebook

This page inspired me to try out spark-csv for reading .csv files in PySpark.
I found a couple of posts such as this describing how to use spark-csv.
But I am not able to start the IPython instance with either the .jar file or the package extension included at start-up, the way it can be done with spark-shell.
That is, instead of
ipython notebook --profile=pyspark
I tried out
ipython notebook --profile=pyspark --packages com.databricks:spark-csv_2.10:1.0.3
but it is not supported.
Please advise.
You can simply pass it in the PYSPARK_SUBMIT_ARGS variable. For example:
export PACKAGES="com.databricks:spark-csv_2.11:1.3.0"
export PYSPARK_SUBMIT_ARGS="--packages ${PACKAGES} pyspark-shell"
These properties can also be set dynamically in your code before the SparkContext / SparkSession and the corresponding JVM have been started:
import os

packages = "com.databricks:spark-csv_2.11:1.3.0"
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--packages {0} pyspark-shell".format(packages)
)
I believe you can also add this as a variable to your spark-defaults.conf file. So something like:
spark.jars.packages com.databricks:spark-csv_2.10:1.3.0
This will load the spark-csv library into PySpark every time you launch the driver.
Obviously zero's answer is more flexible because you can add these lines to your PySpark app before you import the PySpark package:
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.10:1.3.0 pyspark-shell'
from pyspark import SparkContext, SparkConf
This way you are only importing the packages you actually need for your script.
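Once the package is on the classpath by either method, reading a CSV goes through the DataFrame reader. A sketch using the Spark 1.x API the question is targeting (the input file name is hypothetical):
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.10:1.3.0 pyspark-shell'

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="csv-demo")
sqlContext = SQLContext(sc)

df = (sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .load("cars.csv"))   # hypothetical input file
df.show()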

MXMLC compiler missing filesystem library

I've had no trouble until this point working directly from the MXMLC command line. While compiling ActionScript 3 code I ran into a dependency problem.
import flash.filesystem;
and I get
Error: Definition flash:filesystem could not be found
There are another one or two file-related libraries, such as FileStream. Where can I find these standard libraries, and how might I add them to my MXMLC library path?
What are the specific classes you are trying to use? If you want to import all of the classes in the flash.filesystem package you need a * at the end of that import statement. Otherwise you need to append the class name(s). Something like one of these:
import flash.filesystem.*;
or
import flash.filesystem.File;
The other thing that might be an issue is the values in the flex-config.xml (or air-config.xml) file that is part of the SDK. You might need to configure it to include the classes in the AIR SDK, etc.