I just came across an article called The 500 Greatest Songs of All Time and thought "oh that's cool, I bet they also made a Spotify/Apple Music playlist that I can follow". Well... they don't.
So in a nutshell, I wonder if it's possible to 1) scrape the website to extract the songs and 2) then do some kind of bulk upload to Spotify to create the playlist.
Song titles and artists are structured like this on the website:
Website screenshot. I have already tried to scrape the site with the IMPORTXML() formula in Google Sheets, but with no success.
I understand the scraping part is easier than the other and, as I am new to programming, I would be happy to even partially achieve this goal. I am sure this task can be achieved easily in Python.
I feel like explaining everything would go beyond the scope here, so I tried to comment the code well enough.
1. Scrape the songs
I used Python 3 and Selenium; their website doesn't block that.
Be sure to adjust your chromedriver path, and the output path of the .txt file at the bottom, if necessary. Once it's done and you have your .txt file, you can close the browser.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service

s = Service(r'/Users/main/Desktop/chromedriver')
driver = webdriver.Chrome(service=s)

# just setting some vars; I used XPath because that's what I know
top_500 = 'https://www.rollingstone.com/music/music-lists/best-songs-of-all-time-1224767/'
cookie_button_xpath = "// button [@id = 'onetrust-accept-btn-handler']"
div_containing_links_xpath = "// div [@id = 'pmc-gallery-list-nav-bar-render'] // child :: a"
song_names_xpath = "// article [@class = 'c-gallery-vertical-album'] / child :: h2"
links = []
songs = []

driver.get(top_500)

# accept cookies, give time to load
time.sleep(3)
cookie_btn = driver.find_element(By.XPATH, cookie_button_xpath)
cookie_btn.click()
time.sleep(1)

# extracting all the links since there are only 50 songs per page
links_to_next_pages = driver.find_elements(By.XPATH, div_containing_links_xpath)
for element in links_to_next_pages:
    l = element.get_attribute('href')
    links.append(l)

# extracting the songs, then going to the next page and so on until we hit 500
counter = 1  # we're starting at 1 here since links[0] is the current page we are already on
while True:
    elements = driver.find_elements(By.XPATH, song_names_xpath)
    for element in elements:
        songs.append(element.text)
    if len(songs) == 500:
        break
    driver.get(links[counter])
    counter += 1
    time.sleep(2)

# verify that there are no duplicates; if there were, something would be off
if len(songs) != len(set(songs)):
    print('you f***** up')
else:
    print('seems fine')

with open('/Users/main/Desktop/output_songs.txt', 'w') as file:
    file.writelines(line + '\n' for line in songs)
2. Prepare Spotify
Go to the Spotify Developer Dashboard and create an
account (use your Spotify acc).
Then create an app, call it whatever you want.
On your app click settings and whitelist http://localhost:8888/callback
On your app click "users and access" and add your Spotify account
Leave the tab open, we'll come back to it
3. Prepare Your Environment
You need Node.js, so make sure it is installed on your machine
Download this from Spotify's GitHub
Unzip it, cd into the folder and run npm install
Go into the authorization_code folder and open app.js in an editor
Find var scope and append ' playlist-modify-public' to the string; this is so that your app can access your Spotify playlists, see here
Now go back to the app in your Spotify Developer Dashboard; we'll need to copy the Client ID and the Client Secret into var client_id and var client_secret respectively (in the app.js file). var redirect_uri will be
http://localhost:8888/callback - don't forget to save your changes.
4. Run the Spotify side of things
cd into the authorization_code folder and run app.js with node app.js (this is basically a server running on your PC)
Now if that works leave it running and go to http://localhost:8888, authorise your Spotify account there
There, copy the full token, including the overflow; use inspect element to get it.
Adjust the user_id and auth variables, as well as the path to output_songs.txt (at with open), in the following Python script and run it. Songs which are not found will be printed out at the end; give them a search on Google. They are usually on Spotify as well, but Google seems to have the better search algorithm (surprised Pikachu face).
import requests
import re
import json

# this is NOT your display name, it's your user name!!
user_id = 'YOUR_USERNAME'
# paste your auth token from spotify; it can time out, then you have to get a new one,
# so don't panic if you get a bunch of responses in the 400s after some time
auth = {"Authorization": "Bearer YOUR_AUTH_KEY_FROM_LOCALHOST"}

playlist = []
err_log = []
base_url = 'https://api.spotify.com/v1'
search_method = '/search'

with open('/Users/main/Desktop/output_songs.txt', 'r') as file:
    songs = file.readlines()

# this queries spotify, does some magic and then appends each track's spotify uri to an array
def query_song_uris():
    for n, entry in enumerate(songs):
        # each scraped line looks roughly like: Artist, 'Song Title' -- the quoted part is the title
        x = re.findall(r"'([^']*)'", entry)
        title_len = len(entry) - len(x[0]) - 4
        title = x[0]
        artist = entry[:title_len]
        payload = {
            'q': (entry),
            'track:': (title),
            'artist:': (artist),
            'type': 'track',
            'limit': 1
        }
        url = base_url + search_method
        try:
            r = requests.get(url, params=payload, headers=auth)
            print('\nquerying spotify; ', r)
            c = r.content.decode('UTF-8')
            dic = json.loads(c)
            track_uri = dic["tracks"]["items"][0]["uri"]
            playlist.append(track_uri)
            print(track_uri)
        except:
            err = f'\nNr. {(len(songs)-n)}: ' + f'{entry}'
            err_log.append(err)
    playlist.reverse()

query_song_uris()

# creates a playlist and returns the playlist id
def create_playlist():
    payload = {
        "name": "Rolling Stone: Top 500 (All Time)",
        "description": "music for old men xD with occasional hip hop appearances. just kidding"
    }
    url = base_url + f'/users/{user_id}/playlists'
    r = requests.post(url, headers=auth, json=payload)
    c = r.content.decode('UTF-8')
    dic = json.loads(c)
    print(f'\n\ncreating playlist #{dic["id"]}; ', r)
    return dic["id"]

def add_to_playlist():
    playlist_id = create_playlist()
    while True:
        if len(playlist) > 100:
            p = playlist[:100]
        else:
            p = playlist
        payload = {"uris": (p)}
        url = base_url + f'/playlists/{playlist_id}/tracks'
        r = requests.post(url, headers=auth, json=payload)
        print(f'\nadding {len(p)} songs to playlist; ', r)
        del playlist[:len(p)]
        if len(playlist) == 0:
            break

add_to_playlist()

print('\n\ncheck your spotify :)')
print("\n\n\nthese tracks didn't make it, check manually:\n")
for line in err_log:
    print(line)
print('\n\n')
Done
If you don't want to run the code yourself, here's the playlist:
https://open.spotify.com/playlist/5fdLKYNFlA4XSvhEl36KXS
If you have trouble, everything from step 2 on is also described here in the Web API quick start or in general in the web API docs.
Regarding Apple Music
So Apple seems very closed up (surprise, haha). What I found, though, is that you can query the iTunes Store. The response also contains a direct link to the song(s) on Apple Music.
You might be able to go from there.
Get ISRC code from iTunes Search API (Apple music)
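For illustration, here is a minimal sketch of such a query in Python with requests (the search term below is just a made-up example; the endpoint and parameters are the public iTunes Search API):

import requests

# hypothetical example: look up a single song on the iTunes Search API
params = {'term': 'Bob Dylan Like a Rolling Stone', 'entity': 'song', 'limit': 1}
r = requests.get('https://itunes.apple.com/search', params=params)
results = r.json().get('results', [])
if results:
    # trackViewUrl is the direct link to the song in the iTunes Store / Apple Music
    print(results[0]['artistName'], '-', results[0]['trackName'], results[0]['trackViewUrl'])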
PS: undeniably regex is witchcraft, but y'all here got my back
I am trying to grab all the files created under a parent directory. The parent directory has a lot of subdirectories, with files in those directories.
parent
    --- sub folder1
        --- file1
        --- file2
Currently I am grabbing all the ids of sub folders and constructing a query such as q: 'subfolder1id' in parents or 'subfolder2id' in parents to find the list of files. Then I issue these in batches. If I have 100 folders, I issue 10 search queries for a batch size of 10.
Is there a better way of querying the files using google drive rest api that will get me all the files with one query?
Here is an answer to your question.
The same idea applied to your scenario:
folderA____folderA1____folderA1a
       \___folderA2____folderA2a
                   \___folderA2b
There are 3 alternative approaches that I think you can get an idea from.
Alternative 1. Recursion
The temptation would be to list the children of folderA, for any
children that are folders, recursively list their children, rinse,
repeat. In a very small number of cases, this might be the best
approach, but for most, it has the following problems:-
It is woefully time consuming to do a server round trip for each sub folder. This does of course depend on the size of your tree, so if
you can guarantee that your tree size is small, it could be OK.
Alternative 2. The common parent
This works best if all of the files are being created by your app (i.e.
you are using drive.file scope). As well as the folder hierarchy
above, create a dummy parent folder called say "MyAppCommonParent". As
you create each file as a child of its particular Folder, you also
make it a child of MyAppCommonParent. This becomes a lot more
intuitive if you remember to think of Folders as labels. You can now
easily retrieve all descendants by simply querying MyAppCommonParent
in parents.
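As a rough sketch with the Python client (the folder ID is a placeholder and service is a Drive service object like the one built in the code further down), a single call would then cover the whole tree:

# hypothetical sketch: every file was also created with MyAppCommonParent as a parent,
# so one files.list call returns all descendants (paging omitted for brevity)
results = service.files().list(
    q="'MY_APP_COMMON_PARENT_ID' in parents and trashed = false",
    fields="nextPageToken, files(id, name)",
    pageSize=1000
).execute()
all_descendants = results.get('files', [])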
Alternative 3. Folders first
Start by getting all folders. Yep, all of them. Once you have them all
in memory, you can crawl through their parents properties and build
your tree structure and list of Folder IDs. You can then do a single
files.list?q='folderA' in parents or 'folderA1' in parents or
'folderA1a' in parents.... Using this technique you can get
everything in two http calls.
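For illustration, that two-call idea could look roughly like this with the Python client (ignoring paging and query-length limits, and assuming a service object as built in the code below):

# hypothetical sketch: call 1 fetches every folder, call 2 fetches all of their children
folders = service.files().list(
    q="mimeType = 'application/vnd.google-apps.folder' and trashed = false",
    fields="nextPageToken, files(id, name, parents)",
    pageSize=1000
).execute().get('files', [])

# walk the parents properties here to keep only the folders under folderA, then:
parent_filter = " or ".join("'%s' in parents" % f['id'] for f in folders)
children = service.files().list(
    q=parent_filter,
    fields="nextPageToken, files(id, name, parents)",
    pageSize=1000
).execute().get('files', [])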
Alternative 2 is the most efficient, but only works if you have
control of file creation. Alternative 3 is generally more efficient
than Alternative 1, but there may be certain small tree sizes where 1
is best.
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build

scope = ['https://www.googleapis.com/auth/drive']
credentials = ServiceAccountCredentials.from_json_keyfile_name('PATH TO YOUR JSON CREDENTIALS', scope)
service = build('drive', 'v3', credentials=credentials)

folder_id = 'ID OF THE FOLDER YOU WANT TO START YOUR SEARCH'
folder_tree = 'NAME OF THE FOLDER YOU WANT TO START YOUR SEARCH'
folder_ids = {}
folder_ids[folder_tree] = folder_id

def check_for_subfolders(folder_id):
    new_sub_patterns = {}
    folders = service.files().list(q="mimeType='application/vnd.google-apps.folder' and parents in '"+folder_id+"' and trashed = false",fields="nextPageToken, files(id, name)",pageSize=400).execute()
    all_folders = folders.get('files', [])
    all_files = check_for_files(folder_id)
    n_files = len(all_files)
    n_folders = len(all_folders)
    old_folder_tree = folder_tree
    if n_folders != 0:
        for i, folder in enumerate(all_folders):
            folder_name = folder['name']
            subfolder_pattern = old_folder_tree + '/' + folder_name
            new_pattern = subfolder_pattern
            new_sub_patterns[subfolder_pattern] = folder['id']
            print('New Pattern:', new_pattern)
            all_files = check_for_files(folder['id'])
            n_files = len(all_files)
            new_folder_tree = new_pattern
            if n_files != 0:
                for file in all_files:
                    file_name = file['name']
                    new_file_tree_pattern = subfolder_pattern + "/" + file_name
                    new_sub_patterns[new_file_tree_pattern] = file['id']
                    print("Files added :", file_name)
            else:
                print('No Files Found')
    else:
        all_files = check_for_files(folder_id)
        n_files = len(all_files)
        if n_files != 0:
            for file in all_files:
                file_name = file['name']
                new_file_tree_pattern = folder_tree + "/" + file_name
                new_sub_patterns[new_file_tree_pattern] = file['id']
                print("Files added :", file_name)
    return new_sub_patterns

def check_for_files(folder_id):
    other_files = service.files().list(q="mimeType!='application/vnd.google-apps.folder' and parents in '"+folder_id+"' and trashed = false",fields="nextPageToken, files(id, name)",pageSize=400).execute()
    all_other_files = other_files.get('files', [])
    return all_other_files

def get_folder_tree(folder_id):
    global folder_tree
    sub_folders = check_for_subfolders(folder_id)
    for i, sub_folder_id in enumerate(sub_folders.values()):
        folder_tree = list(sub_folders.keys())[i]
        print('Current Folder Tree : ', folder_tree)
        folder_ids.update(sub_folders)
        print('****************************************Recursive Search Begins**********************************************')
        try:
            get_folder_tree(sub_folder_id)
        except:
            print('---------------------------------No furtherance----------------------------------------------')
    return folder_ids

folder_ids = get_folder_tree(folder_id)
We face code quality issues because of inline MySQL queries. Having hand-written MySQL queries inline really clutters the code and also bloats the code base.
Our code is cluttered with stuff like
/* beautify ignore:start */
/* jshint ignore:start */
var sql = "SELECT *"
+" ,DATE_ADD(sc.created_at,INTERVAL 14 DAY) AS duedate"
+" ,distance_mail(?,?,lat,lon) as distance,count(pks.skill_id) c1"
+" ,count(ps.profile_id) c2"
+" FROM TABLE sc"
+" JOIN "
+" PACKAGE_V psc on sc.id = psc.s_id "
+" JOIN "
+" PACKAGE_SKILL pks on pks.package_id = psc.package_id "
+" LEFT JOIN PROFILE_SKILL ps on ps.skill_id = pks.skill_id and ps.profile_id = ?"
+" WHERE sc.type in "
+" ('a',"
+" 'b',"
+" 'c' ,"
+" 'd',"
+" 'e',"
+" 'f',"
+" 'g',"
+" 'h')"
+" AND sc.status = 'open'"
+" AND sc.crowd_type = ?"
+" AND sc.created_at < DATE_SUB(NOW(),INTERVAL 10 MINUTE) "
+" AND sc.created_at > DATE_SUB(NOW(),INTERVAL 14 DAY)"
+" AND distance_mail(?, ?,lat,lon) < 500"
+" GROUP BY sc.id"
+" HAVING c1 = c2 "
+" ORDER BY distance;";
/* jshint ignore:end */
/* beautify ignore:end */
I had to blur the code a little bit.
As you can see, having this repeatedly in your code is just unreadable, especially because at the moment we cannot move to ES6, which would at least make the strings a little prettier thanks to multi-line template strings.
The question now is: is there a way to store these SQL statements in one place? As additional information, we use Node (~0.12) and Express to expose an API, accessing a MySQL DB.
I already thought about using JSON, which would result in an even bigger mess. Plus, it may not even be possible, since JSON is a little strict about its charset and probably will not like multi-line strings either.
Then I came up with the idea of storing the SQL in files and loading them at startup of the Node app. This is at the moment my best shot at getting the SQL queries in ONE place and offering them to the rest of the Node modules.
Question here is, use ONE file? Use one file per query? Use one file per database table?
Any help is appreciated, I can not be the first on the planet solving this so maybe someone has a working, nice solution!
PS: I tried using libs like squel but that does not really help, since our queries are complex as you can see. It is mainly about getting OUR queries into a "query central".
I prefer putting every bigger query in one file. This way you can have syntax highlighting, and it's easy to load on server start. To structure this, I usually have one folder for all queries, and inside that, one folder for each model.
# queries/mymodel/select.mymodel.sql
SELECT * FROM mymodel;
// in mymodel.js
const fs = require('fs');
const queries = {
select: fs.readFileSync(__dirname + '/queries/mymodel/select.mymodel.sql', 'utf8')
};
I suggest you store your queries in .sql files away from your js code. This will separate the concerns and make both code & queries much more readable. You should have different directories with nested structure based on your business.
eg:
queries
├── global.sql
├── products
│   └── select.sql
└── users
    └── select.sql
Now, you just need to require all these files at application startup. You can either do it manually or use some logic. The code below will read all the files (sync) and produce an object with the same hierarchy as the folder above
var glob = require('glob')
var _ = require('lodash')
var fs = require('fs')
// directory containing all queries (in nested folders)
var queriesDirectory = 'queries'
// get all sql files in dir and sub dirs
var files = glob.sync(queriesDirectory + '/**/*.sql', {})
// create object to store all queries
var queries = {}
_.each(files, function(file){
// 1. read file text
var queryText = fs.readFileSync(__dirname + '/' + file, 'utf8')
// 2. store into object
// create regex for directory name
var directoryNameReg = new RegExp("^" + queriesDirectory + "/")
// get the property path to set in the final object, eg: model.queryName
var queryPath = file
// remove directory name
.replace(directoryNameReg,'')
// remove extension
.replace(/\.sql/,'')
// replace '/' with '.'
.replace(/\//g, '.')
// use lodash to set the nested properties
_.set(queries, queryPath, queryText)
})
// final object with all queries according to nested folder structure
console.log(queries)
log output
{
global: '-- global query if needed\n',
products: {
select: 'select * from products\n'
},
users: {
select: 'select * from users\n'
}
}
So you can access all queries like this: queries.users.select
Put your query into a database procedure and call the procedure in the code when it is needed.
create procedure sp_query()
select * from table1;
There are a few things you want to do. First, you want to store multi-line strings without ES6. You can take advantage of a function's toString.
var getComment = function(fx) {
var str = fx.toString();
return str.substring(str.indexOf('/*') + 2, str.indexOf('*/'));
},
queryA = function() {
/*
select blah
from tableA
where whatever = condition
*/
}
console.log(getComment(queryA));
You can now create a module and store lots of these functions. For example:
//Name it something like salesQry.js under the root directory of your node project.
var getComment = function(fx) {
var str = fx.toString();
return str.substring(str.indexOf('/*') + 2, str.indexOf('*/'));
},
query = {};
query.template = getComment(function() { /*Put query here*/ });
query.b = getComment(function() {
/*
SELECT *
,DATE_ADD(sc.created_at,INTERVAL 14 DAY) AS duedate
,distance_mail(?,?,lat,lon) as distance,count(pks.skill_id) c1
,count(ps.profile_id) c2
FROM TABLE sc
JOIN PACKAGE_V psc on sc.id = psc.s_id
JOIN PACKAGE_SKILL pks on pks.package_id = psc.package_id
LEFT JOIN PROFILE_SKILL ps on ps.skill_id = pks.skill_id AND ps.profile_id = ?
WHERE sc.type in ('a','b','c','d','e','f','g','h')
AND sc.status = 'open'
AND sc.crowd_type = ?
AND sc.created_at < DATE_SUB(NOW(),INTERVAL 10 MINUTE)
AND sc.created_at > DATE_SUB(NOW(),INTERVAL 14 DAY)
AND distance_mail(?, ?,lat,lon) < 500
GROUP BY sc.id
HAVING c1 = c2
ORDER BY distance;
*/
});
//Debug
console.log(query.template);
console.log(query.b);
//module.exports.query = query //Uncomment this.
You can require the necessary packages and build your logic right in this module or build a generic wrapper module for better OO design.
//Name it something like SQL.js. in the root directory of your node project.
var mysql = require('mysql'),
connection = mysql.createConnection({
host: 'localhost',
user: 'me',
password: 'secret',
database: 'my_db'
});
module.exports.load = function(moduleName) {
    var SQL = require(moduleName).query; // SalesQry.js exports its queries under "query"
    return {
        query: function(statement, param, callback) {
            connection.connect();
            connection.query(SQL[statement], param, function(err, results) {
                connection.end();
                callback(err, results);
            });
        }
    };
};
To use it, you do something like:
var Sql = require ('./SQL.js').load('./SalesQry.js');
Sql.query('b', param, function (err, results) {
...
});
I come from a different platform, so I'm not sure if this is what you are looking for. Like your application, we had many template queries, and we didn't like having them hard-coded in the application.
We created a table in MySQL that stores Template_Name (unique) and Template_SQL.
We then wrote a small function within our application that returns the SQL template.
something like this:
SQL = fn_get_template_sql(Template_name);
we then process the SQL something like this:
pseudo:
if SQL is not empty
    SQL = replace all parameters // escape the mysql strings from your parameters
    execute the SQL
or you could read the SQL, create connection and add parameters using your safest way.
This allows you to edit the template query wherever and whenever you want. You can create an audit table for the template table capturing all previous changes, so you can revert to a previous template if needed. You can extend the table and capture who last edited the SQL and when.
From a performance point of view, this works on the fly, and you don't have to read any files or restart the server when adding new templates, which matters if you depend on the server-start process for loading them.
You could create a completely new npm module, let's call it the custom-queries module, and put all your complex queries in there.
Then you can categorize all your queries by resource and by action. For example, the dir structure can be:
/index.js -> it will bootstrap all the resources
/queries
/queries/sc (random name)
/queries/psc (random name)
/queries/complex (random name)
The following query can live under the /queries/complex directory in its own file and the file will have a descriptive name (let's assume retrieveDistance)
// You can define some placeholders within this var, because possibly you would like it to be a bit configurable and reusable in different parts of your code.
/* jshint ignore:start */
var sql = "SELECT *"
+" ,DATE_ADD(sc.created_at,INTERVAL 14 DAY) AS duedate"
+" ,distance_mail(?,?,lat,lon) as distance,count(pks.skill_id) c1"
+" ,count(ps.profile_id) c2"
+" FROM TABLE sc"
+" JOIN "
+" PACKAGE_V psc on sc.id = psc.s_id "
+" JOIN "
+" PACKAGE_SKILL pks on pks.package_id = psc.package_id "
+" LEFT JOIN PROFILE_SKILL ps on ps.skill_id = pks.skill_id and ps.profile_id = ?"
+" WHERE sc.type in "
+" ('a',"
+" 'b',"
+" 'c' ,"
+" 'd',"
+" 'e',"
+" 'f',"
+" 'g',"
+" 'h')"
+" AND sc.status = 'open'"
+" AND sc.crowd_type = ?"
+" AND sc.created_at < DATE_SUB(NOW(),INTERVAL 10 MINUTE) "
+" AND sc.created_at > DATE_SUB(NOW(),INTERVAL 14 DAY)"
+" AND distance_mail(?, ?,lat,lon) < 500"
+" GROUP BY sc.id"
+" HAVING c1 = c2 "
+" ORDER BY distance;";
/* jshint ignore:end */
module.exports = sql;
The top level index.js will export an object with all the complex queries. An example can be:
var sc = require('./queries/sc');
var psc = require('./queries/psc');
var complex = require('./queries/complex');
// Quite important because you want to ensure that no one will touch the queries outside of
// the scope of this module. Be careful, because the Object.freeze is freezing only the top
// level elements of the object and it is not recursively freezing the nested objects.
var queries = Object.freeze({
sc: sc,
psc: psc,
complex: complex
});
module.exports = queries;
Finally, on your main code you can use the module like that:
var cq = require('custom-queries');
var retrieveDistanceQuery = cq.complex.retrieveDistance;
// @todo: replace the placeholders if they exist
Doing something like that, you will move all the noise of the string concatenation to another place, where you would expect it, and you will be able to find all your complex queries quite easily in one place.
This is no doubt a million dollar question, and I think the right solution always depends on the case.
Here are my thoughts; I hope they help:
One simple trick (which, in fact, I have read is surprisingly more efficient than joining strings with "+") is to use an array of strings, one per row, and join them.
It is still a mess but, at least for me, a bit clearer (especially when using, as I do, "\n" as the separator instead of spaces, to make the resulting strings more readable when printed out for debugging).
Example:
var sql = [
"select foo.bar",
"from baz",
"join foo on (",
" foo.bazId = baz.id",
")", // I always leave the last comma to avoid errors on possible query grow.
].join("\n"); // or .join(" ") if you prefer.
As a hint: I use that syntax in my own SQL "building" library. It may not work for overly complex queries but, if you have cases where the provided parameters can vary, it is very helpful to avoid (also suboptimal) "coalesce" messes by fully removing unneeded query parts. It is also on GitHub (and it isn't too complex code), so you can extend it if you find it useful.
If you prefer separate files:
About having a single file or multiple files: multiple files are less efficient from the point of view of reading (more file open/close overhead and less effective OS-level caching). But, if you load all of them a single time at startup, the difference is in fact hardly noticeable.
So, the only drawback (for me) is that it is too hard to get a "global glance" of your query collection. Even if you have a very huge amount of queries, I think it is better to mix both approaches. That is: group related queries in the same file, so you have a single file per module, submodel or whatever criterion you choose.
Of course, a single file would result in a relatively "huge" file, also difficult to handle at first. But I heavily use Vim's marker-based folding (foldmethod=marker), which is very helpful for handling those files.
Of course, if you don't (yet) use Vim (truly??), you won't have that option, but surely there is an alternative in your editor. If not, you can always use syntax folding and something like "function (my_tag) {" as markers.
For example:
---(Query 1)---------------------/*{{{*/
select foo from bar;
---------------------------------/*}}}*/
---(Query 2)---------------------/*{{{*/
select foo.baz
from foo
join bar using (foobar)
---------------------------------/*}}}*/
...when folded, I see it as:
+-- 3 línies: ---(Query 1)------------------------------------------------
+-- 5 línies: ---(Query 2)------------------------------------------------
Which, using properly selected labels, is much handier to manage; and, from the parsing point of view, it is not difficult to parse the whole file, splitting queries on those separator rows and using the labels as keys to index the queries.
Dirty example:
#!/usr/bin/env node
"use strict";
var Fs = require("fs");
var src = Fs.readFileSync("./test.sql");
var queries = {};
var label = false;
String(src).split("\n").map(function(row){
var m = row.match(/^-+\((.*?)\)-+[/*{]*$/);
if (m) return queries[label = m[1].replace(" ", "_").toLowerCase()] = "";
if(row.match(/^-+[/*}]*$/)) return label = false;
if (label) queries[label] += row+"\n";
});
console.log(queries);
// { query_1: 'select foo from bar;\n',
// query_2: 'select foo.baz \nfrom foo\njoin bar using (foobar)\n' }
console.log(queries["query_1"]);
// select foo from bar;
console.log(queries["query_2"]);
// select foo.baz
// from foo
// join bar using (foobar)
Finally (an idea): if you go to this much effort, it wouldn't be a bad idea to add a boolean mark next to each query label telling whether that query is intended to be used frequently or only occasionally. Then you can use that information to prepare those statements at application startup, or only when they are going to be used more than a single time.
Can you create a view with that query?
Then select from the view.
I don't see any parameters in the query, so I suppose view creation is possible.
Create stored procedures for all queries, and replace the var sql = "SELECT..." with calls to the procedures, like var sql = "CALL usp_get_packages".
This is the best for performance, and no dependencies break in the application. Depending on the number of queries it may be a huge task, but in every aspect (maintainability, performance, dependencies, etc.) it is the best solution.
I'm late to the party, but if you want to store related queries in a single file, YAML is a good fit because it handles arbitrary whitespace better than pretty much any other data serialization format, and it has some other nice features like comments:
someQuery: |-
  SELECT *
  ,DATE_ADD(sc.created_at,INTERVAL 14 DAY) AS duedate
  ,distance_mail(?,?,lat,lon) as distance,count(pks.skill_id) c1
  ,count(ps.profile_id) c2
  FROM TABLE sc
  -- ...
# Here's a comment explaining the following query
someOtherQuery: |-
  SELECT 1;
This way, using a module like js-yaml you can easily load all of the queries into an object at startup and access each by a sensible name:
const fs = require('fs');
const jsyaml = require('js-yaml');
export default jsyaml.load(fs.readFileSync('queries.yml', 'utf8'));
Here's a snippet of it in action (using a template string instead of a file):
const yml =
`someQuery: |-
  SELECT *
  FROM TABLE sc;
someOtherQuery: |-
  SELECT 1;`;
const queries = jsyaml.load(yml);
console.dir(queries);
console.log(queries.someQuery);
<script src="https://unpkg.com/js-yaml#3.8.1/dist/js-yaml.min.js"></script>
Another approach with separate files by using ES6 string templates.
Of course, this doesn't answer the original question because it requires ES6, but there is already an accepted answer which I'm not intending to replace. I simply thought that it is interesting from the point of view of the discussion about query storage and management alternatives.
// myQuery.sql.js
"use strict";
var p = module.parent;
var someVar = p ? '$1' : ':someVar'; // Comments if needed...
var someOtherVar = p ? '$2' : ':someOtherVar';
module.exports = `
--##sql##
select foo from bar
where x = ${someVar} and y = ${someOtherVar}
--##/sql##
`;
module.parent || console.log(module.exports);
// (or simply "p || console.log(module.exports);")
NOTE: This is the original (basic) approach. I later evolved it, adding some interesting improvements (BONUS, BONUS 2 and FINAL EDIT sections). See the bottom of this post for a full-featured snippet.
The advantages of this approach are:
It is very readable, even with the little JavaScript overhead.
Both the JavaScript and SQL sections can be properly syntax highlighted (at least in Vim).
Parameters are placed as readable variable names instead of silly "$1, $2", etc... and explicitly declared at the top of the file so it's simple to check in which order they must be provided.
Can be required as myQuery = require("path/to/myQuery.sql.js") obtaining valid query string with $1, $2, etc... positional parameters in the specified order.
But, also, can be directly executed with node path/to/myQuery.sql.js obtaining valid SQL to be executed in a sql interpreter
This way you can avoid the mess of copying the query back and forth and replacing parameter specifications (or values) each time between query testing environments and application code: simply use the same file.
Note: I used PostgreSQL syntax for variable names. But with other databases, if different, it's pretty simple to adapt.
More than that: with a few more tweaks (see BONUS section), you can turn it into a viable console testing tool and:
Generate already-parametrized SQL by executing something like node myQueryFile.sql.js parameter1 parameter2 [...].
...or directly execute it by piping it to your database console. Ex: node myQueryFile.sql.js some_parameter | psql -U myUser -h db_host db_name.
Even more: you can also tweak the query to behave slightly differently when executed from the console (see BONUS 2 section), avoiding wasting space displaying large but not meaningful data, while keeping it when the query is read by the application that needs it.
And, of course: you can pipe it again to less -S to avoid line wrapping and be able to easily explore data by scrolling it both in horizontal and vertical directions.
Example:
(
echo "\set someVar 3"
echo "\set someOtherVar 'foo'"
node path/to/myQuery.sql.js
) | psql dbName
NOTES:
'##sql##' and '##/sql##' (or similar) labels are fully optional, but very useful for proper syntax highlighting, at least in Vim.
This extra plumbing is no longer necessary (see BONUS section).
In fact, I don't actually write the (...) | psql... code above directly in the console, but simply (in a Vim buffer):
echo "\set someVar 3"
echo "\set someOtherVar 'foo'"
node path/to/myQuery.sql.js
...as many times as I have test conditions, and execute them by visually selecting the desired block and typing :!bash | psql ...
BONUS: (edit)
I ended up using this approach in many projects, with just a simple modification that consists in changing the last row(s):
module.parent || console.log(module.exports);
// (or simply "p || console.log(module.exports);")
...by:
p || console.log(
`
\\set someVar '''${process.argv[2]}'''
\\set someOtherVar '''${process.argv[3]}'''
`
+ module.exports
);
This way I can generate already-parametrized queries from the command line just by passing the parameters normally as positional arguments. Example:
myUser@myHost:~$ node myQuery.sql.js foo bar
\set someVar '''foo'''
\set someOtherVar '''bar'''
--##sql##
select foo from bar
where x = ${someVar} and y = ${someOtherVar}
--##/sql##
...and, better than that: I can pipe it to postgres (or any other database) console just like this:
myUser@myHost:~$ node myQuery.sql.js foo bar | psql -h dbHost -u dbUser dbName
foo
------
100
200
300
(3 rows)
This approach makes it much easier to test multiple values, because you can simply use the command-line history to recover previous commands and just edit whatever you want.
BONUS 2:
A few more tricks:
1. Sometimes we need to retrieve columns with binary and/or large data that make the output difficult to read in the console and, in fact, we probably don't even need to see them at all while testing the query.
In these cases we can take advantage of the p variable to alter the output of the query and shorten, format more properly, or simply remove that column from the projection.
Examples:
Format: ${p ? jsonb_column : "jsonb_pretty("+jsonb_column+")"},
Shorten: ${p ? long_text : "substring("+long_text+")"},
Remove: ${p ? binary_data + "," : ""} (notice that, in this case, I moved the comma inside the expression so that it can be omitted in the console version).
2. Not a trick, in fact, but just a reminder: we all know that to deal with large output in the console, we only need to pipe it to the less command.
But, at least in my case, I often forget that, when the output is table-aligned and too wide to fit in the terminal, there is the -S modifier to instruct less not to wrap lines and instead let us scroll the text horizontally to explore the data.
Here is the full version of the original snippet with these changes applied:
// myQuery.sql.js
"use strict";
var p = module.parent;
var someVar = p ? '$1' : ':someVar'; // Comments if needed...
var someOtherVar = p ? '$2' : ':someOtherVar';
module.exports = `
--##sql##
select
foo
, bar
, ${p ? baz : "jsonb_pretty("+baz+")"}
${p ? ", " + long_hash : ""}
from bar
where x = ${someVar} and y = ${someOtherVar}
--##/sql##
`;
p || console.log(
`
\\set someVar '''${process.argv[2]}'''
\\set someOtherVar '''${process.argv[3]}'''
`
+ module.exports
);
FINAL EDIT:
I have been evolving this concept a lot more, until it became too broad to be handled as a strictly manual approach.
Finally, taking advantage of the great ES6+ tagged templates, I implemented a much simpler, library-driven approach.
So, in case anyone is interested in it, here it is: SQLTT
Put the query into a DB procedure and call the procedure in the code, as @paval already answered.
You may also refer here.
create procedure sp_query()
select * from table1;
I have a working JSON library which I use to load an array of tile IDs. When I double-click main.lua directly from the file explorer, it runs great, but when I open Corona Simulator and open my project from there, or build my project and run it on my testing device, it gives me a null reference error when I attempt to use the data I loaded.
Here is the function to load a table from a JSON file:
function fileIO.loadJSONFile (fileName)
    local path = fileName
    local contents = ""
    local loadingTable = {}
    local file = io.open (path, "r")
    print (file)
    if file then
        local contents = file:read ("*a")
        loadingTable = json.decode (contents)
        io.close (file)
        return loadingTable
    end
    return nil
end
Here is the usage:
function wr:renderChunkFile (path)
    local data = fileIO.loadJSONFile (path)
    self:renderChunk (data)
end

function wr:renderChunk (data)
    local a, b = 1
    if (self.img ~= nil) then
        a = #self.img + 1
        self.img[a] = {}
    else
        self.img[1] = {}
    end
    if (self.chunks ~= nil) then
        b = #self.chunks + 1
        self.chunks[b] = display.newGroup ()
    else
        self.chunks[1] = display.newGroup ()
    end
    for i = 1, #data do -- Y axis ERROR IS HERE
        self.img[a][i] = {}
        for j = 1, #data[i] do -- Z axis
            self.img[a][i][j] = {}
            for k = 1, #data[i][j] do -- X axis
                if (data[i + 1] ~= nil) then
                    if (data[i + 1][j][k] < self.transparentLimit) then
                        self.img[a][i][j][k] = display.newImage ("images/tiles/"..data[i][j][k]..".png", k*self.tileWidth, display.contentHeight -j*self.tileDepth - i*self.tileThickness)
                        self.chunks[b]:insert (self.img[a][i][j][k])
                    elseif(data[i + 1] == nil) then
                        self.img[a][i][j][k] = display.newImage ("images/tiles/"..data[i][j][k]..".png", k*self.tileWidth, display.contentHeight -j*self.tileDepth - i*self.tileThickness)
                        self.chunks[b]:insert (self.img[a][i][j][k])
                    end
                end
            end
        end
    end
end
When it gets to the line for i = 1, #data do it tells me it is trying to access the length of a nil field. Where did I go wrong here?
EDIT: I feel the need to give a more clear explanation of what my problem is. I am getting inconsistent results from this program. When I select main.lua in file explorer and open it with Corona Simulator, it works. When I open Corona Simulator and internally navigate to main.lua, it does not work. When I build the project and test it on my device, it does not work. What I really need is some insight into Corona's JSON library and APK internal directory structure requirements (directory nesting limits, naming restrictions, etc.). If someone thinks of something else that might cause the issue I am having, please bring it up! I am open to anything.
Without seeing the entire error message and without knowing the value of "path", it's going to be hard to speculate. But Corona SDK uses four base directories:
system.ResourceDirectory -- Same folder as main.lua and is read-only
system.DocumentsDirectory -- Your writable folder where your data lives
system.CachesDirectory -- for downloaded files
system.TemporaryDirectory -- for temp files.
The last three, while in the simulator, live in the project's Sandbox master folder. On a device, who knows where the folders really are.
In your case, if your JSON file is going to be included with your downloadable app, your .json file should be in the same folder as your main.lua (or a subfolder) and referenced in system.ResourceDirectory.
I currently have a MATLAB function that looks like this:
function outfile=multi_read(modelfrom,modelto,type)
models=[modelfrom:1:modelto];
num_models=length(models);
model_path='../MODELS/GRADIENT/'
for id=1:num_models
fn=[model_path num2str(models(id)) '/']; %Location of file to be read
outfile=model_read(fn,type); %model_read is a separate function
end
end
The idea of this function is to execute another function, model_read, for a series of files, and output the results to the workspace (not to disk). Note that the output from model_read is a structure! I want the function to save the files to the workspace using sequential names, similar to typing:
file1=multi_read(1,1,x)
file2=multi_read(2,2,x)
file3=multi_read(3,3,x)
etc.
which would give file1, file2 and file3 in the workspace, but instead by recalling the command only once, something like:
multi_read(1,3,x)
which would give the same workspace output.
Essentially my question is: how do I get a function to output variables with multiple names without having to call the function multiple times?
As suggested in the comment I would try this approach which is more robust, at least IMHO:
N = tot_num_of_your_files; %whatever it is
file = cellfun(@(i)multi_read(i,i,x),mat2cell(1:N,1,ones(1,N)),...
    'UniformOutput' , false); %(x needs to be defined)
You will recover objects by doing file{i}.
Here is code to do what you ask:
for i = 1:3
istr=num2str(i)
line = ['file' istr '= multi_read(' istr ', ' istr ', x)']
eval(line)
end
Alternatively, here is code to do what you should want:
for i = 1:3
file{i} = multi_read(i,i,x)
end