I have a list. I am trying to JSON dump and load it and get specific data out of it, but it's not working.
x = [
    AttributeDict({
        'address': '0xf239F8424AffCbf9CC08Bd0110F0Df011Bcd2e68',
        'logIndex': 0,
        'args': AttributeDict({
            '_value': 63
        }),
        'transactionHash': HexBytes('0x96d06e0f112247fd584cfe9fbdf726d172ec0703bad3604c1182e0abcb67a45a'),
        'event': 'Energy',
        'blockHash': HexBytes('0x3ee6e9f4d682d9a99a94828e9ad7eb7e009e464aed980cd6c3055f62703599fa'),
        'blockNumber': 1327084,
        'transactionIndex': 0
    })
]
The response is shown above. I need to get the "_value" out of it, so I first did the dump:
y = json.dumps(x)
and then the load:
z = json.loads(y)
but I am not getting any data when I try, e.g.,
z['AttributeDict']
How can I get the value out of it? Thanks!
The answer: there is a module in web3.py called web3.datastructures; with the code below you can get the values out of it:
import web3.datastructures as wd
res = wd.AttributeDict(x[0])
print(res['args']['_value'])
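As a side note, the JSON round-trip isn't strictly needed: AttributeDict supports both dict-style and attribute-style access directly, so (given the structure shown above) either of these should work. And if a JSON string really is required, recent web3.py versions also provide Web3.toJSON(), which knows how to encode AttributeDict and HexBytes values.
print(x[0]['args']['_value'])   # dict-style access -> 63
print(x[0].args._value)         # attribute-style access -> 63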
I'm working on something based on this blog post:
https://python-visualization.github.io/folium/quickstart.html#Getting-Started
specifically part 13, using Choropleth maps.
The piece of code they use is the following:
import folium
import pandas as pd

url = (
    "https://raw.githubusercontent.com/python-visualization/folium/master/examples/data"
)
state_geo = f"{url}/us-states.json"
state_unemployment = f"{url}/US_Unemployment_Oct2012.csv"
state_data = pd.read_csv(state_unemployment)

m = folium.Map(location=[48, -102], zoom_start=3)
folium.Choropleth(
    geo_data=state_geo,
    name="choropleth",
    data=state_data,
    columns=["State", "Unemployment"],
    key_on="feature.id",
    fill_color="YlGn",
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name="Unemployment Rate (%)",
).add_to(m)
folium.LayerControl().add_to(m)
m
If I use this, I get the expected map.
Now I am trying to do this with my own data. I work in Databricks,
so I have a JSON file with the GeoJSON data (source_file1) and a CSV file (source_file2) with the data that needs to be plotted on the map.
source_file1 = "dbfs:/mnt/sandbox/MAARTEN/TOPO/Belgie_GEOJSON.JSON"
state_geo = spark.read.json(source_file1,multiLine=True)
source_file2 = "dbfs:/mnt/sandbox/MAARTEN/TOPO/DATASVZ.csv"
df_2 = spark.read.format("CSV").option("inferSchema", "true").option("header", "true").option("delimiter",";").load(source_file2)
state_data = df_2.toPandas()
So I adjusted the code as follows:
m = folium.Map(location=[48, -102], zoom_start=3)
folium.Choropleth(
    geo_data=state_geo,
    name="choropleth",
    data=state_data,
    columns=["State", "Unemployment"],
    key_on="feature.properties.name_nl",
    fill_color="YlGn",
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name="% Marktaandeel CC",
).add_to(m)
folium.LayerControl().add_to(m)
m
So when I pass the geo_data parameter as a Spark DataFrame, I get the following error:
ValueError: Cannot render objects with any missing geometries: DataFrame[features: array<struct<geometry:struct<coordinates:array<array<array<string>>>,type:string>,properties:struct<arr_fr:string,arr_nis:bigint,arr_nl:string,fill:string,fill-opacity:double,name_fr:string,name_nl:string,nis:bigint,population:bigint,prov_fr:string,prov_nis:bigint,prov_nl:string,reg_fr:string,reg_nis:string,reg_nl:string,stroke:string,stroke-opacity:bigint,stroke-width:bigint>,type:string>>, type: string]
I think that when transforming the data from the "blob format" in the Azure data lake to a Spark DataFrame, something goes wrong with the format. I tested this in a Jupyter notebook on my desktop, reading the data straight from the file into folium, and it all works.
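For comparison, the direct file-to-folium flow that works locally looks roughly like this (a sketch with an illustrative local filename; folium's geo_data parameter accepts a plain dict as well as a path):
import json

# Parse the GeoJSON into an ordinary dict instead of a Spark DataFrame.
with open("Belgie_GEOJSON.JSON") as f:
    state_geo = json.load(f)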
If I load it directly from the source, as the example does with its URL, and adjust the geo_data parameter of the folium function accordingly:
m = folium.Map(location=[48, -102], zoom_start=3)
folium.Choropleth(
    geo_data=source_file1,  # this points directly to the data lake
    name="choropleth",
    data=state_data,
    columns=["State", "Unemployment"],
    key_on="feature.properties.name_nl",
    fill_color="YlGn",
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name="% Marktaandeel CC",
).add_to(m)
folium.LayerControl().add_to(m)
m
I get an error indicating that the function expects a local file path and cannot handle the "dbfs:"-prefixed one.
So I started wondering what the difference is between my JSON file and the one from the blog post. The only thing I can imagine is that the Azure data lake doesn't store my JSON as a JSON but as a block blob file, and for some reason I am not converting it properly into something folium can read.
(Screenshot: Azure blob storage / data lake.)
So can someone with folium knowledge let me know:
A. Is it not possible to load the geo_data directly from a data lake?
B. In what format do I need to upload the data?
Any thoughts on this would be helpful!
Thanks in advance!
Solved this issue: I just had to replace "dbfs:" with "/dbfs". I had tried that a lot of times, but I used "/dbfs:" and got another error.
Can't believe I'm this stupid :-)
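For reference, a minimal sketch of the fix (the same Choropleth call as above; only the path changes):
# folium opens files with plain Python I/O, which needs the /dbfs FUSE mount
# rather than the Spark-only "dbfs:" URI scheme.
source_file1 = "/dbfs/mnt/sandbox/MAARTEN/TOPO/Belgie_GEOJSON.JSON"

m = folium.Map(location=[48, -102], zoom_start=3)
folium.Choropleth(
    geo_data=source_file1,
    name="choropleth",
    data=state_data,
    columns=["State", "Unemployment"],
    key_on="feature.properties.name_nl",
    fill_color="YlGn",
    fill_opacity=0.7,
    line_opacity=0.2,
    legend_name="% Marktaandeel CC",
).add_to(m)
Alternatively, parsing the file with json.load() and passing the resulting dict as geo_data also works, since folium accepts dict input.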
I am trying to scrape the pokemon API and create a dataset for all pokemon. So I have written a function which looks like this:
import requests
import json
import pandas as pd
def poke_scrape(x, y):
    '''
    A function that takes in a range of pokemon (based on pokedex ID) and returns
    a pandas dataframe with information related to the pokemon using the Poke API
    '''
    # GATHERING THE DATA FROM THE API
    base_url = 'https://pokeapi.co/api/v2/pokemon/'
    ids = range(x, y + 1)
    pkmn = []
    for id_ in ids:
        url = base_url + str(id_)
        pages = requests.get(url).json()
        # content = json.dumps(pages, indent=4, sort_keys=True)
        if 'error' not in pages:
            pkmn.append([pages['id'], pages['name'], pages['abilities'], pages['stats'], pages['types']])
    # MAKING A DATAFRAME FROM THE GATHERED API DATA
    cols = ['id', 'name', 'abilities', 'stats', 'types']
    df = pd.DataFrame(pkmn, columns=cols)
    return df
The code works fine for most pokemon. However, when I run poke_scrape(229, 229) (so trying to load ONLY the 229th pokemon), it raises a JSONDecodeError.
So far I have tried using json.loads() instead, but that has not solved the issue. What is even more perplexing is that this specific pokemon has loaded before, and the same issue has occurred with another ID; otherwise I could just manually enter the stats for the pokemon that fails to load into my dataframe. Any help is appreciated!
Because of the way the PokeAPI works, some links to the JSON data for each pokemon only load when the URL ends with a '/' (for example, https://pokeapi.co/api/v2/pokemon/229/ works, while https://pokeapi.co/api/v2/pokemon/229 returns not found). However, other URLs respond with an error because of the added '/', so I fixed the issue with a few if statements right after the start of the for loop at the beginning of the function.
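A sketch of that retry idea (the helper name and structure are illustrative, not the poster's exact code):
import requests

def fetch_pokemon(id_):
    # Try the URL without a trailing slash first, then with one, and return
    # the first response that parses as JSON (or None if neither does).
    base = 'https://pokeapi.co/api/v2/pokemon/' + str(id_)
    for candidate in (base, base + '/'):
        resp = requests.get(candidate)
        try:
            return resp.json()
        except ValueError:  # JSONDecodeError is a subclass of ValueError
            continue
    return None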
I have a .MF4 file and want to export a list of channels to a CSV file.
This is the call I used:
list.export(fmt='csv', filename='foo.csv', single_time_base=True, overwrite=True)
Empty channels are skipped by default (see the documentation).
In the CSV I don't get any values for a specific channel, because there are multiple signals with the same name: the export only finds the empty channel and skips it.
How is it possible to get the right signal and add it to the CSV?
Using the development branch code I get the expected result, since a unique name is generated in case of multiple occurrences. So you should see the following columns in the .csv output: "Sig", "Sig_0", "Sig_1" ...
from asammdf import MDF, Signal
import numpy as np
mdf = MDF()
mdf.append(Signal(np.arange(10), np.arange(10), name='Sig'))
mdf.append(Signal([], [], name='Sig'))
mdf.append(Signal(np.arange(0, 10, 0.1), np.arange(0, 10, 0.1), name='Sig'))
mdf.export(fmt='csv', filename='foo.csv', single_time_base=True, overwrite=True)
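To double-check the result, the exported file can be read back with pandas (a quick sketch; the exact name of the time column may differ between versions):
import pandas as pd

# Expect columns along the lines of: timestamps, Sig, Sig_0, Sig_1
print(pd.read_csv('foo.csv').columns.tolist())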
I'm trying to scrape image URLs from Amazon products, for example, this link.
In the page source code there is a section which contains all the URLs for images of different sizes (large, medium, hiRes, etc.). With Scrapy, I can get that part of the script by doing
imagesString = (response.xpath('//script[contains(., "ImageBlockATF")]/text()').extract_first())
which gives me a string that looks like this:
P.when('A').register("ImageBlockATF", function(A){
var data = {
'colorImages': { 'initial': [{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/81FED1p-sTL._SL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/31HoKqtljqL._SS40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/31HoKqtljqL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/81FED1p-sTL._SX355_.jpg":[308,355],"https://images-na.ssl-images-amazon.com/images/I/81FED1p-sTL._SX450_.jpg":[390,450],"https://images-na.ssl-images-amazon.com/images/I/81FED1p-sTL._SX425_.jpg":[369,425],"https://images-na.ssl-images-amazon.com/images/I/81FED1p-sTL._SX466_.jpg":[404,466],"https://images-na.ssl-images-amazon.com/images/I/81FED1p-sTL._SX522_.jpg":[453,522],"https://images-na.ssl-images-amazon.com/images/I/81FED1p-sTL._SX569_.jpg":[494,569],"https://images-na.ssl-images-amazon.com/images/I/81FED1p-sTL._SX679_.jpg":[589,679]},"variant":"MAIN","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/81e8905DlhL._SL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/31Y%2B8oE5DtL._SS40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/31Y%2B8oE5DtL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/81e8905DlhL._SX355_.jpg":[308,355],"https://images-na.ssl-images-amazon.com/images/I/81e8905DlhL._SX450_.jpg":[390,450],"https://images-na.ssl-images-amazon.com/images/I/81e8905DlhL._SX425_.jpg":[369,425],"https://images-na.ssl-images-amazon.com/images/I/81e8905DlhL._SX466_.jpg":[404,466],"https://images-na.ssl-images-amazon.com/images/I/81e8905DlhL._SX522_.jpg":[453,522],"https://images-na.ssl-images-amazon.com/images/I/81e8905DlhL._SX569_.jpg":[494,569],"https://images-na.ssl-images-amazon.com/images/I/81e8905DlhL._SX679_.jpg":[589,679]},"variant":"PT01","lowRes":null},{"hiRes":null,"thumb":"https://images-na.ssl-images-amazon.com/images/I/51rORrvh0hL._SS40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/51rORrvh0hL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/51rORrvh0hL._SX355_.jpg":[236,355],"https://images-na.ssl-images-amazon.com/images/I/51rORrvh0hL._SX450_.jpg":[300,450],"https://images-na.ssl-images-amazon.com/images/I/51rORrvh0hL._SX425_.jpg":[283,425],"https://images-na.ssl-images-amazon.com/images/I/51rORrvh0hL._SX466_.jpg":[310,466],"https://images-na.ssl-images-amazon.com/images/I/51rORrvh0hL.jpg":[333,500]},"variant":"PT02","lowRes":null},{"hiRes":null,"thumb":"https://images-na.ssl-images-amazon.com/images/I/41L2OU5rPyL._SS40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/41L2OU5rPyL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/41L2OU5rPyL._SX355_.jpg":[236,355],"https://images-na.ssl-images-amazon.com/images/I/41L2OU5rPyL._SX450_.jpg":[300,450],"https://images-na.ssl-images-amazon.com/images/I/41L2OU5rPyL._SX425_.jpg":[283,425],"https://images-na.ssl-images-amazon.com/images/I/41L2OU5rPyL._SX466_.jpg":[310,466],"https://images-na.ssl-images-amazon.com/images/I/41L2OU5rPyL.jpg":[333,500]},"variant":"PT03","lowRes":null},{"hiRes":null,"thumb":"https://images-na.ssl-images-amazon.com/images/I/51%2BsCYjx6OL._SS40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/51%2BsCYjx6OL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/51%2BsCYjx6OL._SX355_.jpg":[236,355],"https://images-na.ssl-images-amazon.com/images/I/51%2BsCYjx6OL._SX450_.jpg":[300,450],"https://images-na.ssl-images-amazon.com/images/I/51%2BsCYjx6OL._SX425_.jpg":[283,425],"https://images-na.ssl-images-amazon.com/images/I/51%2BsCYjx6OL._SX466_.jpg":[310,466],"https://images-na.ssl-images-amazon.com/images/I/51%2BsCYjx6OL.jpg":[333,500]},"variant":"PT04","lowRes":null}]},
'colorToAsin': {'initial': {}},
'holderRatio': 1.0,
'holderMaxHeight': 700,
'heroImage': {'initial': []},
'heroVideo': {'initial': []},
'spin360ColorData': {'initial': {}},
'spin360ColorEnabled': {'initial': 0},
'spin360ConfigEnabled': false,
'spin360LazyLoadEnabled': false,
'playVideoInImmersiveView':'false',
'tabbedImmersiveViewTreatment':'T2',
'totalVideoCount':'0',
'videoIngressATFSlateThumbURL':'',
'mediaTypeCount':'0',
'atfEnhancedHoverOverlay' : true,
'winningAsin': 'B00XLSS79Y',
'weblabs' : {},
'aibExp3Layout' : 1,
'aibRuleName' : 'frank-powered',
'acEnabled' : false
};
A.trigger('P.AboveTheFold'); // trigger ATF event.
return data;
});
My goal is to get the data inside colorImages into a JSON dictionary so I can easily get each URL.
I tried doing something like this:
m = re.search(r'^var data = ({.*};)', imagesString , re.S | re.M)
data = m.group()
jsonObj = json.loads(data[:-1].replace("'", '"'))
But it seems that imagesString does not work well with re.search; I keep getting errors saying imagesString is not a string when it actually is.
I previously got similar data from an Amazon page by using re.findall, something like this (script is a chunk of text I got from the page):
variationValues = re.findall(r'variationValues\" : ({.*?})', ' '.join(script))[0]
and then
variationValuesDict = json.loads(variationValues)
But my knowledge of regular expressions is not that great.
From the string I pasted above, I erased the start and end so only the data remained; the result is here:
https://jsoneditoronline.org/?id=9ea92643044f4ac88bcc3e76d98425fc
I can't figure out how to get colorImages with re.findall() (or from the data in the JSON editor) so that I can load it with json and use it like a dictionary. Any ideas on how to achieve this?
You just need to convert the var data into valid JSON markup first. It is easy: replace all the ' characters with " and delete the spaces, and you will get a JSON object (the correct JSON you are after).
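For example, one way to combine that idea with a regex (a sketch: the pattern assumes the 'colorToAsin' key always follows 'colorImages' in the script, which holds for the snippet above but is not guaranteed on every Amazon page):
import json
import re

# Capture everything between 'colorImages': and the following 'colorToAsin' key.
m = re.search(r"'colorImages':\s*({.*?})\s*,\s*'colorToAsin'", imagesString, re.S)
if m:
    # The outer keys use single quotes; the inner JSON already uses double quotes.
    color_images = json.loads(m.group(1).replace("'", '"'))
    for image in color_images['initial']:
        print(image.get('hiRes') or image.get('large'))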
Sorry all! I know there are several (many) posts regarding JSON and the Google API. I read them all and have been trying to fix the problem for several days. I am going to explode if I have to clear my browsing history again to start with a "fresh" mind...
I could really use some help. Any and all help would be greatly appreciated.
Console error prints : "TypeError: b[le] is not a function"
Here is my function:
function timeline() {
    $.post('NewQuery.php', function (data) {
        alert(data);
        var data = new google.visualization.DataTable(JSON.parse(data));
        var view = new google.visualization.DataView(data);
        view.setColumns([0, 2]);
        var chart = new google.visualization.LineChart(document.getElementById('chart2'));
        chart.draw(view);
    });
}
Here is the JSON object that is alerted (I formatted it to make it more legible):
{
"cols":[
{"label":"Dia","type":"date"},
{"label":"Fruta","type":"string"},
{"label":"CostoTotal","type":"number"}
],
"rows":[
{"c":[{"v":"Dia(2014, 3, 9)"},{"v":"Kiwis"},{"v":2628}]},
{"c":[{"v":"Dia(2014, 3, 18)"},{"v":"Cerezos"},{"v":3200841}]},
{"c":[{"v":"Dia(2014, 3, 23)"},{"v":"Peras"},{"v":113696}]},
{"c":[{"v":"Dia(2014, 3, 1)"},{"v":"Cerezos"},{"v":4294967295}]}
]
}
The problem is with your date strings: they should use the word "Date", not "Dia". Google's DataTable JSON parser only recognizes the literal "Date(year, month, day)" form, and any other token makes the draw fail with an opaque error like the "b[le] is not a function" you are seeing. This:
"Dia(2014, 3, 9)"
should be this:
"Date(2014, 3, 9)"