ipython tab completion to only show a module's own functions, not other modules it imports

I like using tab completion in ipython to see which functions are available in an imported module. I am aware that imports inside a module can be hidden from tab completion if they are done with an underscore alias, like import os as _os. Is there a way to avoid seeing imported modules when the imports are done plainly, like import os?
Example:
with_underscore.py:
import os as _os

def list(path):
    return _os.listdir(path)

without_underscore.py:
import os

def list(path):
    return os.listdir(path)
After importing the two modules into ipython
[1]: import with_underscore
[2]: import without_underscore
tab completion on without_underscore.<tab> yields list and os, while tab completion on with_underscore.<tab> yields only list, which is what I want. How could I get only list with tab completion without using the underscore-import approach?
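One possibility (not from the original thread, just a sketch assuming Python 3.7+): give the module a __dir__ hook (PEP 562) so that dir() on the module, and therefore IPython's completer, only reports the names you choose to export:

# without_underscore.py -- sketch, assumes Python 3.7+ for module-level __dir__ (PEP 562)
import os

__all__ = ['list']          # the names you want tab completion to offer

def list(path):
    return os.listdir(path)

def __dir__():
    # dir(without_underscore) now returns only the exported names,
    # so "os" no longer shows up in IPython's completion list.
    return __all__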

ImportError: Cannot import Cython module

I have a class named neuron written in Cython which works perfectly fine inline in Jupyter using the %%cython magic:
cdef class neuron: pass
and I am trying to cythonize it so that I can import it on a cluster and run larger-scale experiments from Jupyter in a conda environment. My setup.py file looks like this:
from setuptools import setup, Extension
from Cython.Build import cythonize

extensions = [
    Extension("Neuronal_Cascades_cython_Base1", ["Neuronal_Cascades_cython_Base1.pyx"]),
]

setup(
    name="Neuronal_Cascades_cython_Base",
    ext_modules=cythonize(extensions),
)
Cythonizing works fine and the .so and .c files are created without any errors. But when I import the module in a Jupyter notebook, I get this import error:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-3-639e2d302e82> in <module>
1 import matplotlib.pyplot as plt
2 import numpy as np
----> 3 from Neuronal_Cascades_cython_Base1 import neuron
4 import os
5 import pickle
ImportError: cannot import name 'neuron' from 'Neuronal_Cascades_cython_Base1' (/Users/bengieru/Neuronal_Cascades/Cython/Neuronal_Cascades_cython_Base1.cpython-37m-darwin.so)
Can anyone tell me what I am doing wrong? I feel like it may be related to how setup.py imports dependencies, but I'm not sure how to fix it.
I found my mistake. After trying a million things, I realized it had nothing to do with my setup.py or my source code. There were three issues:
setup.py needs to be in the parent directory to work properly.
An empty __init__.py file needs to be in the child directory:
Parent
├── Neuronal_Cascades_Base1
│   ├── neuron.pyx
│   └── __init__.py
└── setup.py
Saving a .ipynb file as neuron.pyx via 'Save as' in the Jupyter notebook menu produces a file full of notebook metadata rather than plain source. Deleting everything inside neuron.pyx and simply copy-pasting the original Cython source code into it resolved the issue.
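For reference, a minimal sketch of the layout and build step described above; the file and package names are taken from the answer, but the exact cythonize() call is an assumption:

# setup.py, placed in the parent directory (sketch)
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="Neuronal_Cascades_cython_Base",
    # compile the .pyx inside the child package directory
    ext_modules=cythonize("Neuronal_Cascades_Base1/neuron.pyx"),
)

# Build in place from the parent directory:
#   python setup.py build_ext --inplace
# Then, with an empty Neuronal_Cascades_Base1/__init__.py present:
#   from Neuronal_Cascades_Base1.neuron import neuron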

No module named _caffe

_caffe.so is present in the caffe/python/caffe folder.
I have set the path variable: export PYTHONPATH=/home/itstudent1/caffe/python:$PYTHONPATH.
make pycaffe was also successful.
I don't understand what else might be causing this error. I am able to import caffe in python.
File "/home/itstudent1/MajorProject/densecap-master/lib/tools/../../python/caffe/pycaffe.py", line 13, in <module>
    from ._caffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, \
ImportError: No module named _caffe
It seems like you have two copies of caffe:
one in /home/itstudent1/caffe and another in /home/itstudent1/MajorProject/densecap-master.
While the first copy is built and compiled, the latter is not, and your import looks for _caffe.so in the latter.
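As a quick way to confirm which copy is being picked up (just a diagnostic sketch; the path is the one from the question), you can check caffe.__file__ from Python:

import sys

# Put the built copy first on the path (path taken from the question)
sys.path.insert(0, "/home/itstudent1/caffe/python")

import caffe
print(caffe.__file__)   # shows which caffe package was actually imported

If the printed path points at the densecap-master copy, that copy still needs its own make pycaffe run, since pycaffe.py resolves _caffe relative to its own directory.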

how to prepare cython submodules

I have three questions, but they are related, and I don't see how to split them cleanly. I've found plenty of information about these issues, like submodule extensions, hierarchy, the empty __init__.py file, and how to cythonize multiple pyx files. But when I put them together I cannot make them work.
I've prepared a small repo intended to collect code samples for the issues as they get solved. I've even checked the code of some of the listed projects that use Cython, but I still don't see how to do these three things at the same time.
Empty __init__.py file:
In a project where all the files are .pyx (and .pxd if need be), with an __init__.pyx that includes all of them, when there is also an __init__.py file, importing the package loads the empty __init__.py instead of the .so.
cythonize multiple files:
When, instead of preparing an __init__.pyx that includes all the elements of the module, I cythonize the files separately, like:
cythonize(['factory.pyx', 'version.pyx'])
importing the resulting ".so" raises an exception:
>>> import barfoo
(...)
ImportError: dynamic module does not define init function (PyInit_barfoo)
This may be related to the previous question, i.e. whether it is necessary to write something in the __init__.py.
Submodule:
In fact, this is the main question. The singleton.pyx would be part of a submodule, let's say utils, to be used from other elements of the module.
For the sample there is a submodule (simply called subm) added in the setup.py as another extension. I've placed it before the main one (I don't know if this really makes any difference; I didn't notice one).
>>> import barfoo
>>> import barfoo.subm
(...)
ImportError: No module named 'barfoo.subm'
Separately, those recipes work, but together they don't. The "__init__.py" seems to be necessary when there is a mix of "py" and "pyx" files. The examples explain how to cythonize multiple files, but don't include the last key point needed for the import. And for submodules, they don't cover how they can be imported one way or the other (importing the submodules when the base package is imported, or only when they are explicitly requested).
Thanks to the comments from oz1 and DavidW, I've got the solution. Yes, those three things come together.
The import order in setup.py is very important. Even though PEP 8 doesn't say that imports should be alphabetically sorted, other guidelines (like reddit's) do, and I usually follow them:
If cythonize is imported first and then setup, then when cythonize(find_pyx()) is called the result will be a list of distutils.extension.Extension objects.
from setuptools import setup, find_packages
from Cython.Build import cythonize
setuptools must be imported before Cython; then the result of cythonize() will be a list of setuptools.extension.Extension objects that can be passed to the setup() call.
It is also important to understand the meaning of the __init__ files:
All the __init__.pyx files with includes have been removed, and each .pyx file now produces its own .so binary.
The main module and the submodules exist as long as their directories contain an __init__.py file, just as happens with pure Python code.
In the example I've linked, the file barfoo/__init__.py is not empty because I want import barfoo to provide access to elements like version() or Factory(). This __init__.py is what imports them, just like in normal pure Python.
For the submodule:
The same applies to the submodule and its own __init__.py file. In this example, import barfoo does a from .factory import Factory, which in turn calls from barfoo.subm import Bar, so subm becomes available.
But even if the submodule is not imported in this indirect way, the user can still access it with calls like import barfoo.subm.
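To make that concrete, here is a minimal sketch of the layout and setup.py this answer describes; the package names follow the barfoo example, and find_pyx is a small helper written for this sketch, not a library function:

# Layout (sketch):
#   setup.py
#   barfoo/__init__.py        -> from .factory import Factory
#   barfoo/factory.pyx
#   barfoo/version.pyx
#   barfoo/subm/__init__.py   -> from .bar import Bar
#   barfoo/subm/bar.pyx

# setup.py
import glob

from setuptools import setup, find_packages   # setuptools first...
from Cython.Build import cythonize            # ...then Cython

def find_pyx():
    # Collect every .pyx file in the package tree (helper assumed for this sketch).
    return glob.glob("barfoo/**/*.pyx", recursive=True)

setup(
    name="barfoo",
    packages=find_packages(),          # finds barfoo and barfoo.subm via their __init__.py files
    ext_modules=cythonize(find_pyx()), # each .pyx becomes its own extension module
)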
Last night I saw your question, and made a simple example according to the wiki. But that question was deleted quickly.
Here is the example: https://github.com/justou/cython_package_demo
Make sure the C compiler and Python environment are set up correctly, then compile the .pyx files by running:
python setup.py build_ext --inplace
Usage is the same as for a pure Python package:
from dvedit.filters import flip, inverse, reverse
flip.show() # print: In flip call Core function
inverse.show() # print: In inverse call Core function
reverse.show() # print: In reverse call Core function
BTW, there is no need to create an __init__.pyx; you can do the extension-module imports in the __init__.py file.
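As a rough illustration of that last point (the module names are taken from the usage example above, so treat the exact layout as an assumption):

# dvedit/filters/__init__.py  (sketch)
# Re-export the compiled extension modules so that
# "from dvedit.filters import flip, inverse, reverse" works.
from . import flip, inverse, reverse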

How to load jar dependenices in IPython Notebook

This page inspired me to try out spark-csv for reading .csv files in PySpark.
I found a couple of posts, such as this one, describing how to use spark-csv.
But I am not able to initialize the ipython instance by including either the .jar file or the package extension at start-up, the way it can be done through spark-shell.
That is, instead of
ipython notebook --profile=pyspark
I tried out
ipython notebook --profile=pyspark --packages com.databricks:spark-csv_2.10:1.0.3
but it is not supported.
Please advise.
You can simply pass it in the PYSPARK_SUBMIT_ARGS variable. For example:
export PACKAGES="com.databricks:spark-csv_2.11:1.3.0"
export PYSPARK_SUBMIT_ARGS="--packages ${PACKAGES} pyspark-shell"
These properties can also be set dynamically in your code, before the SparkContext / SparkSession and the corresponding JVM have been started:
import os

packages = "com.databricks:spark-csv_2.11:1.3.0"
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--packages {0} pyspark-shell".format(packages)
)
I believe you can also add this as a variable to your spark-defaults.conf file. So something like:
spark.jars.packages com.databricks:spark-csv_2.10:1.3.0
This will load the spark-csv library into PySpark every time you launch the driver.
Obviously zero's answer is more flexible because you can add these lines to your PySpark app before you import the PySpark package:
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.10:1.3.0 pyspark-shell'
from pyspark import SparkContext, SparkConf
This way you are only importing the packages you actually need for your script.
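For completeness, a sketch of what reading a CSV with the package then looks like in a Spark 1.x notebook (the input path is a placeholder):

import os

os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--packages com.databricks:spark-csv_2.10:1.3.0 pyspark-shell'
)

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="csv-demo")
sqlContext = SQLContext(sc)

# spark-csv registers the "com.databricks.spark.csv" data source
df = (sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .load("/path/to/file.csv"))   # placeholder path
df.show()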

Plone/SQLAlchemy(?) - How can I import a python package (i.e. sqlalchemy) in a module in a subpackage?

I am trying to import sqlalchemy in a module in a subpackage.
Here is my folder layout
PloneInstance/
    my.package/
        my/
            package/
                subpackage/
In the buildout.cfg file of the root folder, I add "sqlalchemy" to the eggs.
In my.package, in configure.zcml, I add an include for the subpackage.
In the subpackage, I have a blank __init__.py file, a configure.zcml file, and a file called mymodule.py
In mymodule.py I have a line for importing sqlalchemy
import sqlalchemy
Unfortunately, I am getting an error when I try to run an instance:
ImportError: No module named sqlalchemy
I'm assuming I am missing a step. How do I properly import python packages?
Thank you in advance. I apologize if my terminology is off.
Edit:
The module I was importing from turned out to be zope.sqlalchemy.
I had overlooked this because, prior to moving the files to a subpackage, the import statement for zope.sqlalchemy was working without zope.sqlalchemy being added to the eggs section of the buildout.
Look in the setup.py file at the top directory of your package. You'll find a section like:
install_requires=[
    'setuptools',
    # -*- Extra requirements: -*-
],
In place of the "Extra requirements' comment, put a comma-separated list of strings specifying your package's requirements. You may even specify versions.
Do not add standard Plone packages to the list. They're taken for granted.
Re-run buildout after specifying your requirements. The new install_requires entries will then be added to your Python environment when you start Plone.
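For example, for the imports described in this question the filled-in section might look like this (the exact entries depend on what your code imports; no versions are pinned here):

install_requires=[
    'setuptools',
    # -*- Extra requirements: -*-
    'sqlalchemy',
    'zope.sqlalchemy',
],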