'MySQLConnection' object has no attribute 'my_cursor' - mysql

I'm new to MySQL. I installed Git Bash, Sublime Text and MySQL; pip3 is working, and I am able to import mysql.connector.
I'm trying to create a database called mydb.
The problem is, I'm getting an AttributeError: 'MySQLConnection' object has no attribute 'my_cursor'
python database.py
Traceback (most recent call last):
File "database.py", line 10, in
my_cursor = mydb.my_cursor()
AttributeError: 'MySQLConnection' object has no attribute 'my_cursor''

I don't know which import you are using, or whether you used one at all, but you could try:
import MySQLdb
Also, I think you need to call cursor() instead of my_cursor(). The question isn't explained in detail and I don't have the source code, so I can't be more precise.
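Since you are able to import mysql.connector, a minimal sketch for creating the database with that driver might look like this (the connection credentials are placeholders; the key point is that the method is cursor(), not my_cursor()):
import mysql.connector

# placeholder connection details - replace with your own
mydb = mysql.connector.connect(
    host="localhost",
    user="root",
    password="yourpassword",
)

my_cursor = mydb.cursor()  # the method is cursor(), not my_cursor()
my_cursor.execute("CREATE DATABASE mydb")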

Transforming shapefiles to dataframes with shapefile_to_dataframe() helper function - fiona related error

I am trying to use the Palantir Foundry helper function shapefile_to_dataframe() in order to ingest shapefiles for later usage in geolocation features.
I have manually imported the shapefiles (.shp, .shx & .dbf) in a single dataset (no access issues through the filesystem API).
As per the documentation, I have imported geospatial-tools and the GEOSPARK profiles, and included the dependencies in the transforms-python build.gradle.
Here is my transform code, which is mostly extracted from the documentation:
from transforms.api import transform, Input, Output, configure
from geospatial_tools import geospatial
from geospatial_tools.parsers import shapefile_to_dataframe

@geospatial()
@transform(
    raw=Input("ri.foundry.main.dataset.0d984138-23da-4bcf-ad86-39686a14ef21"),
    output=Output("/Indhu/InDhu/Vincent/geo_energy/datasets/extract_coord/raw_df"),
)
def compute(raw, output):
    return output.write_dataframe(shapefile_to_dataframe(raw))
Code Assist then becomes extremely slow to load, and I finally get the following error:
AttributeError: partially initialized module 'fiona' has no attribute '_loading' (most likely due to a circular import)
Traceback (most recent call last):
File "/myproject/datasets/shp_to_df.py", line 3, in <module>
from geospatial_tools.parsers import shapefile_to_dataframe
File "/scratch/standalone/3a553998-623b-48f5-9c3f-03de7e64f328/code-assist/contents/transforms-python/build/conda/env/lib/python3.8/site-packages/geospatial_tools/parsers.py", line 11, in <module>
from fiona.drvsupport import supported_drivers
File "/scratch/standalone/3a553998-623b-48f5-9c3f-03de7e64f328/code-assist/contents/transforms-python/build/conda/env/lib/python3.8/site-packages/fiona/__init__.py", line 85, in <module>
with fiona._loading.add_gdal_dll_directories():
AttributeError: partially initialized module 'fiona' has no attribute '_loading' (most likely due to a circular import)
Thanks a lot for your help,
Vincent
I was able to reproduce this error, and it seems to happen only in previews - running the full build seems to work fine. The simplest way to get around it is to move the import inside the function:
from transforms.api import transform, Input, Output, configure
from geospatial_tools import geospatial

@geospatial()
@transform(
    raw=Input("ri.foundry.main.dataset.0d984138-23da-4bcf-ad86-39686a14ef21"),
    output=Output("/Indhu/InDhu/Vincent/geo_energy/datasets/extract_coord/raw_df"),
)
def compute(raw, output):
    # deferring this import avoids the circular import triggered in previews
    from geospatial_tools.parsers import shapefile_to_dataframe
    return output.write_dataframe(shapefile_to_dataframe(raw))
However, at the moment, shapefile_to_dataframe isn't going to work in Preview anyway, because the full transforms.api.FileSystem API isn't implemented there - specifically, the preview's ls function doesn't implement the glob parameter that the full transforms API does.

TypeError("can't pickle re.Match objects") error when pickling using dill / pickle

I can't seem to figure out a way to pickle this, can anyone help?
It seems to be because of the way the __reduce_ex__ machinery behaves for re.Match objects.
Code:
import re
x = re.match('abcd', 'abcd')
print(type(x))
print(x.__reduce_ex__(3))
Output:
<class 're.Match'>
Traceback (most recent call last):
File "an.py", line 4, in <module>
print(x.__reduce_ex__(3))
TypeError: can't pickle re.Match objects
My exact issue is that I am trying to pickle an object of a lex/yacc parser implementation class after submitting a string to it to parse.
If I try to pickle the object without parsing any string through it, it pickles fine. The problem arises only after I parse a string with it and then try to pickle the object.
Match objects do not have __getstate__ and __setstate__ methods and thus cannot be pickled, so any object graph that contains one cannot be pickled either.
More about this subject can be found here:
https://docs.python.org/3/library/pickle.html#pickle-picklable
Here is further documentation on the objects in question:
https://docs.python.org/3/library/re.html#match-objects
An alternative solution is to implement __getstate__ and __setstate__ to help the pickling process. This requires creating a custom class and implementing those methods, which may be overcomplicated for this situation.
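If you do go that route, a minimal sketch might look like this (ParseState is a hypothetical stand-in for the lex/yacc parser class from the question; the idea is to drop the unpicklable Match when pickling and re-run the match when unpickling):
import pickle
import re

class ParseState:
    # hypothetical stand-in for the parser class from the question
    def __init__(self, pattern):
        self.pattern = pattern
        self.text = None
        self.match = None  # re.Match objects cannot be pickled

    def parse(self, text):
        self.text = text
        self.match = re.match(self.pattern, text)
        return self.match

    def __getstate__(self):
        state = self.__dict__.copy()
        state["match"] = None  # drop the unpicklable Match before pickling
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        if self.text is not None:
            # rebuild the Match by re-running the parse on the saved input
            self.match = re.match(self.pattern, self.text)

p = ParseState("abcd")
p.parse("abcd")
restored = pickle.loads(pickle.dumps(p))
print(restored.match)  # <re.Match object; span=(0, 4), match='abcd'>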
Hope that helped

Loading XGBoost Model: ModuleNotFoundError: No module named 'sklearn.preprocessing._label'

I'm having issues loading a pretrained xgboost model using the following code:
xgb_model = pickle.load(open('churnfinalunscaled.pickle.dat', 'rb'))
And when I do that, I get the following error:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-29-31e7f426e19e> in <module>()
----> 1 xgb_model = pickle.load(open('churnfinalunscaled.pickle.dat', 'rb'))
ModuleNotFoundError: No module named 'sklearn.preprocessing._label'
I haven't seen anything online so any help would be much appreciated.
I was able to solve my issue. Simply updating scikit-learn from 0.21.3 to 0.22.0 seems to solve the issue. Along the way I had to update my pandas version to 0.25.2 as well.
The clue is provided in this link: https://www.gitmemory.com/vruusmann, where it states:
During Scikit-Learn version upgrade from 0.21.X to 0.22.X many modules were renamed (typically, by prepending an underscore character to the module name). For example, sklearn.preprocessing.label.LabelEncoder became sklearn.preprocessing._label.LabelEncoder.
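For example, you can verify which versions are installed before and after upgrading (a quick check, assuming the package versions mentioned above):
import sklearn
import pandas

# the unpickling error goes away once the installed scikit-learn has the
# same module layout as the one the model was pickled with
print(sklearn.__version__)  # 0.22.0 or later in the scenario above
print(pandas.__version__)   # 0.25.2 in the scenario above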

Import issue with the "serialization.json.JSON" class in ActionScript

I'm working on a project using ActionScript and Flex. For some reason I have a problem importing the com.adobe.serialization.json.JSON class.
When I work only with the Flex SDK files and try to use it, I get the following error:
Error:(142, 70) [..]: Error code: 1120: Access of undefined property
JSON.
And of course IntelliJ marks this file and the import in red.
On the other hand when I import the corelib.swc that includes this file I get the following error:
Error:[..]: Can not resolve a multiname reference unambiguously. JSON
(from /Volumes/backup/FlexSDK/frameworks/libs/air/airglobal.swc(JSON,
Walker)) and com.adobe.serialization.json:JSON (from
/Volumes/backup/.../libs/corelib.swc(com.adobe.serialization.json:JSON))
are available.
What is going on here? How can I solve this?
JSON is a top-level class available in all scopes since Flash Player 11. Trying to import any class named JSON will result in an error. If (for some reason) you really do not want to use the built-in JSON class and instead import a custom one, you'll have to rename it.
Using IntelliJ, the best you can do is use the JSON class from the SDK you already have; it has the methods parse() and stringify(), which do the same as the corelib methods for JSON.
If you want to use com.adobe.serialization.json.JSON anyway, it will conflict with the one declared in the SDK.
Hope the information is useful.

nltk.word_tokenize() giving AttributeError: 'module' object has no attribute 'defaultdict'

I am new to nltk.
I was trying some basics.
import nltk
nltk.word_tokenize("Tokenize me")
gives me this following error
Traceback (most recent call last):
File "<pyshell#27>", line 1, in <module>
nltk.word_tokenize("hi im no onee")
File "C:\Python27\lib\site-packages\nltk\tokenize\__init__.py", line 101, in word_tokenize
return [token for sent in sent_tokenize(text, language)
File "C:\Python27\lib\site-packages\nltk\tokenize\__init__.py", line 85, in sent_tokenize
tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
File "C:\Python27\lib\site-packages\nltk\data.py", line 786, in load
resource_val = pickle.load(opened_resource)
AttributeError: 'module' object has no attribute 'defaultdict'
Can someone please tell me how to fix this error?
I just checked it on my system.
Fix:
>>> import nltk
>>> nltk.download('all')
Then everything worked fine.
>>> import nltk
>>> nltk.word_tokenize("Tokenize me")
['Tokenize', 'me']
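If you would rather not download everything, the traceback points at the punkt tokenizer models ('tokenizers/punkt/...pickle'), so downloading just that resource may be enough (an assumption based on the path in the error):
import nltk

# fetch only the punkt models that word_tokenize relies on
nltk.download('punkt')

print(nltk.word_tokenize("Tokenize me"))  # ['Tokenize', 'me']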
I had the same error, and then I realized that I had saved my file as tokenize.py, which shadowed the standard module; when I renamed my Python file, it worked fine. Hope this is helpful.
I found out later that I was using outdated NLTK data. The program started to work fine as soon as I updated the data.
You need to update your NLTK version. If you are using Anaconda, run the following in a terminal:
conda update nltk
It will update NLTK. Then restart IPython and it should work!