Is there a straightforward way to load a .csv file into SimpleGeo Storage? I don't have great coding skills, and I'm trying to get things set up so I can ask a freelancer to create some maps for my app. If someone has existing code to do this, I can probably figure out how to make it work for my situation.
I just skimmed over the API. Here's a basic example in Python.
Assumed csv format:
layer, id, lat, lon
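A data row might then look like: mylayer, point-1, 37.7749, -122.4194 (layer name, record id, latitude, longitude; the values are just illustrative).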
from simplegeo.models import Record, Client

client = Client('your-oauth-token', 'your-oauth-secret')

# read() is needed here: a file object has no split() method
lines = open('file.csv').read().split('\n')

for line in lines:
    parts = line.split(',')
    # skips blank or malformed lines; assumes the file has no header row
    if len(parts) == 4:
        layer = parts[0].strip()
        record_id = parts[1].strip()
        lat = float(parts[2].strip())
        lon = float(parts[3].strip())
        r = Record(layer, record_id, lat, lon)
        client.storage.add_record(r)
After a bit more digging, I found a Python example on their site for this exact purpose:
https://simplegeo.com/docs/tutorials/general-hackery#how-import-csv-file-simplegeo
import csv
import simplegeo

OAUTH_TOKEN = '[insert_oauth_token_here]'
OAUTH_SECRET = '[insert_oauth_secret_here]'
CSV_FILE = '[insert_csv_file_here]'
LAYER = '[insert_layer_name_here]'

client = simplegeo.Client(OAUTH_TOKEN, OAUTH_SECRET)

def insert(data):
    layer = LAYER
    id = data.pop("id")
    lat = data.pop("latitude")
    lon = data.pop("longitude")
    # Grab more columns if you wish
    record = simplegeo.Record(layer, id, lat, lon, **data)
    client.add_record(record)

r = csv.DictReader(open(CSV_FILE, mode='U'))
for l in r:
    insert(l)
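Note that csv.DictReader reads the column names from the file's header row, so the first line of the CSV needs to name at least id, latitude and longitude; any extra columns are passed through to the record via **data.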
I have to filter the data, so I need to create a new CSV file based on the filters.
I'm having trouble doing it, because the new file doesn't change after I run the code.
Below is my code. I'm working with two CSV files; Stage_3_try.csv is the one I'm trying to add the new data to. I used enumerate to get the index of the specific value I searched for in the previous CSV file.
# Projection
import csv
import numpy as np               # needed for np.array / np.asarray below
import matplotlib.pyplot as plt  # needed for plt.plot below

# east_3, north_3, stage_3 and point_on_line are defined earlier in the script
A = np.array([316143.8829, 6188926.04])
B = np.array([314288.7418, 6190277.519])

for i in range(len(east_3)):
    P = np.asarray([east_3[i], north_3[i]])
    projected = point_on_line(P)  # a function that does the projection
    x_values = [A[0], B[0]]
    y_values = [A[1], B[1]]
    plt.plot(x_values, y_values, 'b-')
    if projected[0] > 315745.75 and projected[1] > 6188289:
        with open('Stage_3_try.csv', 'a') as f_out:
            writer = csv.writer(f_out)
            for num, row in enumerate(stage_3['UTM North NAD83']):
                if row == P[1]:
                    writer.writerow(stage_3.loc[num])
                    print(type(stage_3.loc[num]))
        # the with block closes the file; no explicit f_out.close() is needed
        plt.plot(projected[0], projected[1], 'rx')
PS: I updated the code; the previous version worked, but when I added it to the loop, it stopped working.
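In case it helps, here is a minimal sketch of how the filtering could be done in a single pass with pandas instead of appending inside the loop. It assumes stage_3 is a DataFrame whose rows line up one-for-one with east_3/north_3, and it reuses point_on_line and the thresholds from the snippet above:
import numpy as np

# stage_3, east_3, north_3 and point_on_line come from the original script
projected = np.array([point_on_line(np.asarray(p)) for p in zip(east_3, north_3)])

# boolean mask of the rows whose projected point passes the filter
mask = (projected[:, 0] > 315745.75) & (projected[:, 1] > 6188289)

# write all matching rows at once instead of appending row by row
stage_3[mask].to_csv('Stage_3_try.csv', index=False)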
I'm trying to plot the boundaries of the localities of Brussels. The coordinate system of my JSON file has to be converted to a long/lat system to display the polygons on Folium maps. The issue is that my coordinates end up projected into the Pacific Ocean, probably because the parameters I set are not the right ones. Please find my code below:
import json
import pyproj
import folium

# Load JSON file
with open("districts.json", "r") as f:
    data = json.load(f)

# Create a transformation object
in_proj = pyproj.Proj(proj='utm', zone=31, datum='WGS84')
out_proj = pyproj.Proj(proj='longlat', datum='WGS84')

# Transform the coordinates
features = data["features"]
for feature in features:
    coords = feature["geometry"]["coordinates"][0]
    coords = [pyproj.transform(in_proj, out_proj, coord[0], coord[1]) for coord in coords]
    feature["geometry"]["coordinates"] = [coords]

# Plot the polygon on a map
m = folium.Map()
folium.GeoJson(data).add_to(m)
m
This is how my JSON file is structured:
{"type":"FeatureCollection","features":[{"geometry":{"type":"Polygon","coordinates":[[[152914.748398394,173305.19242333],[152947.4133984,173326.530423339],...,[152961.983398418,173225.325423267],[152914.748398394,173305.19242333]]]},...
Screenshots of the result: https://i.stack.imgur.com/SuU4Q.png and https://i.stack.imgur.com/oIKJN.png
Does anyone have an idea how to solve this? How could I find the right parameters?
I tried different zones, but I would rather know how to find the right zone number and understand how it works.
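One hint for finding the right parameters: coordinates around 150,000/170,000 for Brussels look like the Belgian Lambert 72 grid (EPSG:31370) rather than UTM; Belgian administrative data commonly ships in that CRS. Below is a minimal sketch of the conversion using pyproj's Transformer API, with EPSG:31370 as an assumed source CRS; verify it against your data's documentation:
import json
import folium
from pyproj import Transformer

# assumption: the source data is Belgian Lambert 72 (EPSG:31370)
transformer = Transformer.from_crs("EPSG:31370", "EPSG:4326", always_xy=True)

with open("districts.json", "r") as f:
    data = json.load(f)

for feature in data["features"]:
    ring = feature["geometry"]["coordinates"][0]
    # always_xy=True keeps the (x, y) -> (lon, lat) order GeoJSON expects
    feature["geometry"]["coordinates"] = [
        [list(transformer.transform(x, y)) for x, y in ring]
    ]

m = folium.Map(location=[50.85, 4.35], zoom_start=11)  # centred on Brussels
folium.GeoJson(data).add_to(m)
m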
I just came across an article called The 500 Greatest Songs of All Time and thought "oh that's cool, I bet they also made a Spotify/Apple Music list that I can follow". Well... they didn't.
So in a nutshell, I wonder if it's possible to 1) scrape the website to extract the songs and 2) then do some kind of bulk upload to Spotify to create the list.
Song titles and artists are structured like this on the website:
Website screenshot. I have already tried to scrape the web with the importxml() formula in Google Sheets, but with no success.
I understand the scraping part is easier than the other and, as I am new to programming, I would be happy to manage to partially achieve this goal. I am sure this task can be achieved easily with Python.
I feel like explaining everything would go beyond the scope here, so I tried to comment the code well enough.
1. Scrape the songs
I used Python 3 and Selenium; their website doesn't block that.
Be sure to adjust your chromedriver path, and if necessary the output path of the .txt file at the bottom. Once it's done and you have your .txt file, you can close it.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service

s = Service(r'/Users/main/Desktop/chromedriver')
driver = webdriver.Chrome(service=s)

# just setting some vars, I used XPath because I know that
top_500 = 'https://www.rollingstone.com/music/music-lists/best-songs-of-all-time-1224767/'
cookie_button_xpath = "//button[@id='onetrust-accept-btn-handler']"
div_containing_links_xpath = "//div[@id='pmc-gallery-list-nav-bar-render']//a"
song_names_xpath = "//article[@class='c-gallery-vertical-album']/h2"
links = []
songs = []

driver.get(top_500)

# accept cookies, give the page time to load
time.sleep(3)
cookie_btn = driver.find_element(By.XPATH, cookie_button_xpath)
cookie_btn.click()
time.sleep(1)

# extract all the page links, since there are only 50 songs per page
links_to_next_pages = driver.find_elements(By.XPATH, div_containing_links_xpath)
for element in links_to_next_pages:
    links.append(element.get_attribute('href'))

# extract the songs, then go to the next page and so on until we hit 500
counter = 1  # we're starting at 1 here since links[0] is the page we are already on
while True:
    elements = driver.find_elements(By.XPATH, song_names_xpath)
    for element in elements:
        songs.append(element.text)
    if len(songs) == 500:
        break
    driver.get(links[counter])
    counter += 1
    time.sleep(2)

# verify that there are no duplicates; if there were, something would be off
if len(songs) != len(set(songs)):
    print('you f***** up')
else:
    print('seems fine')

with open('/Users/main/Desktop/output_songs.txt', 'w') as file:
    file.writelines(line + '\n' for line in songs)
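A note on the output: judging by the parsing in step 4 below, each scraped line has the shape Artist, 'Song Title', and the regex there relies on the single quotes around the title, so it's worth checking that your output_songs.txt matches that shape.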
2. Prepare Spotify
Go to the Spotify Developer Dashboard and create an account (use your Spotify account).
Then create an app, call it whatever you want.
On your app, click settings and whitelist http://localhost:8888/callback
On your app, click "users and access" and add your Spotify account
Leave the tab open, we'll come back to it
3. Prepare Your Environment
You need Node.js, so make sure it is installed on your machine
Download this from Spotify's GitHub
Unzip it, cd into the folder and run npm install
Go into the authorization_code folder and open app.js in an editor
Find var scope and append ' playlist-modify-public' to the string; this is so that your app can access your Spotify playlists, see here
Now go back to the app in your Spotify Developer Dashboard. We'll need to copy the Client ID and the Client Secret into var client_id and var client_secret respectively (in the app.js file). var redirect_uri will be http://localhost:8888/callback - don't forget to save your changes.
4. Run the Spotify side of things
cd into the authorization_code folder and run app.js with node app.js (this is basically a server running on your PC)
If that works, leave it running, go to http://localhost:8888 and authorise your Spotify account there
Copy the full token there, including the overflow; use inspect element to get it
Adjust the user_id and auth variables, as well as the path to output_songs.txt (at with open), in the following Python script, then run it. Songs that are not found will be printed out at the end; give them a search on Google. They are usually on Spotify as well, but Google seems to have the better search algorithm (surprised Pikachu face).
import requests
import re
import json

# this is NOT your display name, it's your user name!!
user_id = 'YOUR_USERNAME'
# paste your auth token from Spotify; it can time out, and then you have to get a
# new one, so don't panic if you get a bunch of responses in the 400s after some time
auth = {"Authorization": "Bearer YOUR_AUTH_KEY_FROM_LOCALHOST"}

playlist = []
err_log = []
base_url = 'https://api.spotify.com/v1'
search_method = '/search'

with open('/Users/main/Desktop/output_songs.txt', 'r') as file:
    songs = file.readlines()

# this queries Spotify, does some magic and then appends each track's Spotify URI to an array
def query_song_uris():
    for n, entry in enumerate(songs):
        # the title is the part in single quotes; the artist is everything before it
        x = re.findall(r"'([^']*)'", entry)
        title_len = len(entry) - len(x[0]) - 4
        title = x[0]
        artist = entry[:title_len]
        payload = {
            # note: 'track:' and 'artist:' are not documented search parameters;
            # the free-text q appears to do the actual matching
            'q': entry,
            'track:': title,
            'artist:': artist,
            'type': 'track',
            'limit': 1
        }
        url = base_url + search_method
        try:
            r = requests.get(url, params=payload, headers=auth)
            print('\nquerying spotify; ', r)
            dic = json.loads(r.content.decode('UTF-8'))
            track_uri = dic["tracks"]["items"][0]["uri"]
            playlist.append(track_uri)
            print(track_uri)
        except Exception:
            # log the miss and move on
            err = f'\nNr. {len(songs) - n}: {entry}'
            err_log.append(err)
    # the list was scraped from #500 down to #1, so reverse it
    playlist.reverse()

query_song_uris()

# creates a playlist and returns the playlist id
def create_playlist():
    payload = {
        "name": "Rolling Stone: Top 500 (All Time)",
        "description": "music for old men xD with occasional hip hop appearances. just kidding"
    }
    url = base_url + f'/users/{user_id}/playlists'
    r = requests.post(url, headers=auth, json=payload)
    dic = json.loads(r.content.decode('UTF-8'))
    print(f'\n\ncreating playlist #{dic["id"]}; ', r)
    return dic["id"]

# adds the collected URIs to the playlist, 100 at a time (the API limit per request)
def add_to_playlist():
    playlist_id = create_playlist()
    while True:
        if len(playlist) > 100:
            p = playlist[:100]
        else:
            p = playlist
        payload = {"uris": p}
        url = base_url + f'/playlists/{playlist_id}/tracks'
        r = requests.post(url, headers=auth, json=payload)
        print(f'\nadding {len(p)} songs to playlist; ', r)
        del playlist[:len(p)]
        if len(playlist) == 0:
            break

add_to_playlist()
print('\n\ncheck your spotify :)')
print("\n\n\nthese tracks didn't make it, check manually:\n")
for line in err_log:
    print(line)
print('\n\n')
Done
If you don't want to run the code yourself, here's the playlist:
https://open.spotify.com/playlist/5fdLKYNFlA4XSvhEl36KXS
If you have trouble, everything from step 2 onwards is also described in the Web API quick start, or more generally in the Web API docs.
Regarding Apple Music
So Apple seems very closed up (surprise haha). What I found, though, is that you can query the iTunes Store; the response also contains a direct link to the song(s) on Apple Music.
You might be able to go from there.
Get ISRC code from iTunes Search API (Apple music)
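For reference, a minimal sketch of such a query against the iTunes Search API (the endpoint and field names are Apple's documented ones; the search term is just an example):
import requests

# the iTunes Search API needs no authentication
url = 'https://itunes.apple.com/search'
params = {'term': "The Beatles A Day in the Life", 'entity': 'song', 'limit': 1}

for track in requests.get(url, params=params).json()['results']:
    # trackViewUrl is the direct Apple Music / iTunes link mentioned above
    print(track['artistName'], '-', track['trackName'], ':', track['trackViewUrl'])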
PS: undeniably regex is witchcraft, but y'all here got my back
Right now I'm attempting to make a simple music player app that streams music or video directly from a YouTube URL, and in order to do that I need the full download of the search page that's used to search for videos to stream. But I'm having some problems with the urlopen function from urllib.request in Python 3, which is what I'm using to build the command-line application. It won't load the ytd-app tag on YouTube, which is where a good deal of the video and playlist references are placed when the search first loads. Does anyone know what's going on, or know some kind of workaround for it? Thanks!
My code so far:
from urllib.request import urlopen
from bs4 import BeautifulSoup as BS

BASICURL = "https://www.youtube.com/results?"

# query and filtercriteria are set elsewhere in the application
query = query.split()
ret = ""
stufffound = {}
for x in query:
    ret = ret + x + "+"
ret = ret[:len(ret) - 1]

# URL BUILDER
if filtercriteria:
    URL = BASICURL + "sp={0}".format(filtercriteria) + "&search_query={0}".format(ret)
else:
    URL = BASICURL + "search_query={0}".format(ret)

passdict = {}

def findvideosonpage(url, dictToAddTo):
    # download and parse the page, then collect the href of every video link
    soup = BS(urlopen(url).read(), 'html.parser')
    for num, x in enumerate(soup.findAll(attrs={'class': 'yt-simple-endpoint style-scope ytd-video-renderer'})):
        dictToAddTo[num] = x['href']
        print(x)
    # the dictionary is converted into a list to order the results
    return [x for _, x in sorted(zip(dictToAddTo.keys(), dictToAddTo.values()))]

findvideosonpage(URL, passdict)
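The underlying problem is that urlopen only fetches the raw HTML, while the contents of ytd-app are filled in afterwards by YouTube's JavaScript, so they never appear in the downloaded page. One workaround is to let a real browser render the page first, e.g. with Selenium; here is a minimal sketch (the a#video-title selector is an assumption about YouTube's current markup and may need adjusting):
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes chromedriver is on your PATH
driver.get("https://www.youtube.com/results?search_query=test+query")
time.sleep(3)  # crude wait for the JavaScript to render the results

# collect the video links; 'a#video-title' is an assumed selector
links = [a.get_attribute('href')
         for a in driver.find_elements(By.CSS_SELECTOR, 'a#video-title')]
print(links)
driver.quit()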
I'm working on interacting with the Google Flights API (QPX). I am using the following link and working with the following experimental package to feed in information for a request:
https://github.com/rweyant/googleflights
Below is the code I have thus far for anyone interested in replicating my results:
# call library and data ------------------------------------------------------
library(googleflights)
library(MUCflights) # to access airport codes
data("airports")

# codes for countries I'm interested in ---------------------------------------
code_list = airports
# later interface for updating codes
my_destinations = matrix(c("San Juan", "Amsterdam", "Berlin",
                           "San Diego", "Lima", "Cali", "Havana"))
my_home = matrix(c("LGA", "JFK"))

# loop extract
code_bucket = NULL
for (i in my_destinations) {
  print(i)
  drop = code_list[code_list$City == i, c("City", "IATA")]
  drop = as.data.frame(drop)
  print(drop)
  code_bucket = rbind(code_bucket, drop)
  code_bucket = as.data.frame(code_bucket)
}

# clean my code bucket --------------------------------------------------------
code_bucket = na.omit(code_bucket)
code_bucket = code_bucket[code_bucket$IATA != "", ]
code_bucket

# feed codes into the search function -----------------------------------------
# each ping to QPX will combine NYC to x
# data I want:
#   pricing
#   times
key = "(key is here)"
set_apikey(key)
result_flights = search(my_home[1], code_bucket[2, 2], "2016-11-27", "2016-11-28")
I've been looking through the package details to understand the functionality, and I noticed that the request comes back as a list rather than JSON, which seems to be for use with a "summarise_segment" function that isn't working for me. Here is the link to the function I'm referencing:
https://github.com/rweyant/googleflights/blob/master/R/unpack.R
I'm wondering if anyone has had any luck or has ideas for parsing the returned request? The resulting list is large, and I'm reaching the limits of my knowledge when it comes to dealing with these structures. Any help pointing me in the right direction would be appreciated!