How do I use MySQL for a dynamic doc root with Nginx?

I've been trying to find a way to capture the environment variable HOSTNAME and then use a MySQL query to fetch the document root for our vhosts and return it to the Nginx conf. We currently use dynamic doc roots in Apache but are migrating to Nginx.
example nginx.conf (pseudocode - it might look something like this):
server {
    listen 80;
    # grab environment variable HOSTNAME
    $hostname = ENV(HOSTNAME);
    # execute a MySQL query
    $doc_root = mysql(SELECT docroot FROM table WHERE host = '$hostname');
    # set document root
    root /var/www/$doc_root;
    .....
I was exploring using Lua and https://github.com/openresty/lua-resty-mysql but have been unable to figure out how to capture HOSTNAME, run the MySQL query, and return the result back as a variable.

Thanks for your help. The suggestions didn't work for me directly, but after a lot of work I finally got something working. I'm posting it here in case someone else ever needs it.
It turns out $http_host is already defined globally in Nginx, so capturing the host was the easy part.
set $httphost $http_host; # create and initialize var
set $docroot "";
# begin Lua scripting
rewrite_by_lua '
    -- make sure the HTTP host is defined
    if not ngx.var.httphost then
        ngx.log(ngx.ERR, "ERROR - no httphost defined")
        return
    end

    -- connect to MySQL
    local mysql = require "resty.mysql"
    local db, err = mysql:new()
    db:set_timeout(1000) -- 1 sec

    local ok, err, errno, sqlstate = db:connect{
        host = "127.0.0.1",
        port = 3306,
        database = "db",
        user = "user",
        password = "password",
        max_packet_size = 1024 * 1024
    }
    if not ok then
        ngx.log(ngx.ERR, "MySQL failed to connect: ", err, ": ", errno, " ", sqlstate)
        return
    end

    -- quote the value to prevent an injection attack
    local hname = ngx.unescape_uri(ngx.var.httphost)
    local quoted_name = ngx.quote_sql_str(hname)
    local sql = "select docroot from users where customer = " .. quoted_name

    local result, err, errno, sqlstate = db:query(sql, 1)
    if not result then
        ngx.log(ngx.ERR, "MySQL bad result: ", err, ": ", errno, ": ", sqlstate, ".")
        return
    end
    if not result[1] or not result[1].docroot then
        ngx.log(ngx.ERR, "MySQL ERROR - no docroot was returned")
        return
    end

    ngx.var.docroot = result[1].docroot
';
# now we can set the docroot for this host
root /var/www/$docroot;

First of all, using a database for basic routing does not sound like a very good idea - I would recommend caching the results in memory and perhaps refreshing them periodically from the database.
Second, the basic Nginx config file will only get you so far - for more advanced functionality you will need a scripting language (like Lua) on top of it. One of the things this allows you to do is read environment variables. I wrote about how to do that here:
https://docs.apitools.com/blog/2014/07/02/using-environment-variables-in-nginx-conf.html
The usual way to get Lua working on Nginx is OpenResty, a distribution of Nginx that comes with several modules pre-installed, including the Lua one. You can add lua-resty-mysql to the mix and then do everything you want directly from Lua. I have never used lua-resty-mysql, so I can't write that code for you; if you are set on using MySQL, you will have to study its docs. Take a look at Redis while you are at it - it might be a better fit than MySQL.
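The caching recommendation above can be sketched as follows - a minimal, illustrative Python version (the lookup function and TTL value are hypothetical; inside OpenResty you would implement the same idea with a shared memory dict rather than Python):

```python
import time

# Hypothetical lookup: in the real setup this would be the MySQL query
# that maps a hostname to its document root.
def fetch_docroot_from_db(hostname):
    return "/var/www/" + hostname

class TTLCache:
    """Tiny in-memory cache: an entry is refreshed from the source
    once it is older than ttl seconds."""
    def __init__(self, fetch, ttl=60):
        self.fetch = fetch
        self.ttl = ttl
        self.entries = {}  # key -> (value, fetched_at)

    def get(self, key):
        hit = self.entries.get(key)
        if hit is not None and time.time() - hit[1] < self.ttl:
            return hit[0]          # fresh: no database round trip
        value = self.fetch(key)    # stale or missing: refresh
        self.entries[key] = (value, time.time())
        return value

cache = TTLCache(fetch_docroot_from_db, ttl=60)
print(cache.get("example.com"))  # /var/www/example.com
```

With a TTL like this, the database is hit at most once per hostname per minute no matter how many requests arrive, which is the "refresh periodically" behaviour described above.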

Related

Compromised safeguard of data due to bad encoding usage?

I am using Jupyter & Python 3.6.4 via Anaconda.
I want to be able to process and store data from Python to a MySQL database.
The libraries I am using to do this are pymysql and sqlalchemy.
For now, I am testing this locally with WAMP (MySQL version: 5.7.21); later I will apply it to a remote server.
Database creation function:
def create_raw_mysql_db(host, user, password, db_name):
    conn = pymysql.connect(host=host, user=user, password=password)
    conn.cursor().execute('DROP DATABASE IF EXISTS ' + db_name)
    conn.cursor().execute('CREATE DATABASE ' + db_name + ' CHARACTER SET utf8mb4')
Function to convert a Dataframe to a relational table in MySql:
def save_raw_to_mysql_db(df, table_name, db_name, if_exists, username, password, host_ip, port):
    # the connection URL must have the form user:password@host:port/db
    engine = create_engine("mysql+pymysql://" + username + ":" + password + "@" + host_ip
                           + ":" + port + "/" + db_name + "?charset=utf8mb4")
    df.to_sql(name=table_name, con=engine, if_exists=if_exists, chunksize=10000)
The execution code:
#DB info & credentials
host = "localhost"
port = "3306"
user= "root"
password= ""
db_name= "raw_data"
exade_light_tb = "exade_light"
#A simple dataframe
df = pd.DataFrame(np.random.randint(low=0, high=10, size=(5, 5)),columns=['a', 'b', 'c', 'd', 'e'])
create_raw_mysql_db(host,user,password,db_name)
save_raw_to_mysql_db(df,exade_light_tb,db_name,"replace",user,password,host,port)
The warning I receive when I run this code:
C:\Users.... : Warning: (1366, "Incorrect string value: '\x92\xE9t\xE9)' for column 'VARIABLE_VALUE' at row 481")
result = self._query(query)
From these threads: /questions/34165523/ questions/47419943 questions/2108824/, I could conclude the problem must be related to the utf8 charset, but I am using utf8mb4 to create my db and I am not using Django (which supposedly also needed to be configured according to questions/2108824/).
My questions:
1. How is this warning really impacting my data and its integrity?
2. How come, even though I changed the charset from utf8 to utf8mb4, it doesn't seem to solve the warning? Do I need to configure something further? In that case, what are the parameters I should keep in mind to apply the same configuration to my remote server?
3. How do I get rid of this warning?
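No fix was posted for this one, but the warning itself gives a strong hint: the rejected bytes are valid Windows-1252 (the character set MySQL actually uses for its "latin1"), not valid UTF-8. A minimal sketch, using the exact byte sequence from the warning above, to show what MySQL is complaining about:

```python
# Byte sequence taken from the warning message: '\x92\xE9t\xE9'
raw = b"\x92\xe9t\xe9"

# These bytes are not valid UTF-8, so a utf8/utf8mb4 column rejects
# them with "Incorrect string value".
try:
    raw.decode("utf-8")
    is_valid_utf8 = True
except UnicodeDecodeError:
    is_valid_utf8 = False
print(is_valid_utf8)  # False

# They decode cleanly as Windows-1252: 0x92 is a right single quote
# and 0xE9 is e-acute - a fragment of French text.
print(raw.decode("cp1252"))  # ’été
```

So the utf8mb4 database charset is not the issue; somewhere in the chain (the source data, the client connection, or a server setting) latin1/cp1252 bytes are still being exchanged, and each leg is worth checking rather than only the CREATE DATABASE charset.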

How to fix MySQL uppercase queries in PHP and MySQL

I am currently working on a website that uses the ADOdb library. Across the entire website, all the queries are written in UPPERCASE.
The problem is that when I run a query it doesn't work because the table name is UPPERCASE; when I change the table name to lowercase, it works.
$sql = "SELECT * FROM MEMBERS where USERNAME = '$username'";
$db = ADONewConnection('mysql');
$db->debug = true;
$db->Connect(DB_HOSTNAME, DB_USERNAME, DB_PASSWORD, DB_NAME);
$resultFriends = $db->Execute($sql);
while ($row = $resultFriends->FetchRow()) {
    var_dump($row);
    die;
}
Here is the error I get:
ADOConnection._Execute(SELECT * FROM MEMBERS where USERNAME = 'fury', false) % line 1012, file: adodb.inc.php
ADOConnection.Execute(SELECT * FROM MEMBERS where USERNAME = 'fury') % line 15, file: index.php
Bear in mind I don't want to change the scripts - there are 1000 files and 10000 places.
Is there any library, or is there any way, that I can run these queries without error?
The live site was running on Linux, but the new dev site is Ubuntu. I have tried this on Ubuntu via the MySQL command line and it didn't work.
The solution was to reconfigure the MySQL database in AWS RDS:
You have to modify the “lower_case_table_names” parameter for your DB Instance(s). Prior to today, the lower_case_table_names parameter was not modifiable, with a system default of zero (0) or “table names stored as specified and comparisons are case sensitive.” Beginning immediately, values of zero and one (table names are stored in lowercase and comparisons are not case sensitive) are allowed. See the MySQL documentation for more information on the lower_case_table_names parameter.
The lower_case_table_names parameter can be specified via the rds-modify-db-parameter-group API. Simply include the parameter name and specify the desired value, such as in the following example:
rds-modify-db-parameter-group example --parameters "name=lower_case_table_names, value=1, method=pending-reboot" --region us-east-1
Support for modifying parameters via the AWS Management Console is expected to be added later this year.
We recommend setting the lower_case_table_names parameter via a custom DB Parameter Group, and doing so before creating the associated DB Instance: changing the parameter for existing DB Instances could cause inconsistencies with point-in-time recovery backups and with Read Replicas.
Amazon RDS
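On a self-managed MySQL server (such as the Ubuntu dev box mentioned in the question, where there is no RDS parameter group), the equivalent is a server config setting - a sketch assuming the usual Debian/Ubuntu file layout; it must be set before the tables are created, since existing table names are not rewritten, and the server needs a restart afterwards:

```ini
# /etc/mysql/my.cnf (or a drop-in file under /etc/mysql/conf.d/)
[mysqld]
# 0 = store table names as given, compare case-sensitively (Linux default)
# 1 = store table names in lowercase, compare case-insensitively
lower_case_table_names = 1
```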

Autohotkey close database connection

I have made a script using the following library:
http://www.autohotkey.com/board/topic/72629-mysql-library-functions/
to connect to my non-local database. However, I'm running into max_user_connections problems, and I think this is because I never close the database connection.
I can't seem to find a way to do that using this library, but I am not certain - maybe there's something built into AHK for closing any open connection (to a database or otherwise) that would work?
Script:
hi() {
    mysql := new mysql
    db := mysql.connect("x","x","x","x") ; host, user, password, database
    if db =
        return
    sql =
    (
    UPDATE something
    SET yo = yo+1
    WHERE id = 1
    )
    result := mysql.query(db, sql)
}
Thanks in advance
The DLL used by the AHK library does export the mysql_close function, but it isn't wrapped by the AHK library itself.
You can technically call it manually via DllCall, the same way the library calls the other MySQL functions, and see if that works.
Since I also need to connect to a MySQL DB via AHK, I'll update this answer when a full solution is available.

Pairing content on an external website with entries in an mySQL database

tl;dr: I'm looking for a way to find entries in our database which are missing information, getting that information from a website and adding it to the database entry.
We have a media management program which uses a MySQL table to store the information. When employees download media (video files, images, audio files) and import it into the media manager, they are supposed to also copy the description of the media (from the source website) and add it to the description in the Media Manager. However, this has not been done for thousands of files.
The file name (eg. file123.mov) is unique and the details page for that file can be accessed by going to a URL on the source website:
website.com/content/file123
The information we want to scrape from that page has an element ID which is always the same.
In my mind the process would be:
Connect to database and Load table
Filter: "format" is "Still Image (JPEG)"
Filter: "description" is "NULL"
Get first result
Get "FILENAME" (without extension)
Load the URL: website.com/content/FILENAME
Copy contents of the element "description" (on website)
Paste contents into the "description" (SQL entry)
Get 2nd result
Rinse and repeat until last result is reached
My question(s) are:
Is there software that could perform such a task or is this something that would need to be scripted?
If scripted, what would be the best type of script (eg could I achieve this using AppleScript or would it need to be made in java or php etc.)
Is there software that could perform such a task or is this something that would need to be scripted?
I'm not aware of anything that will do what you want out of the box (and even if there was, the configuration required won't be much less work than the scripting involved in rolling your own solution).
If scripted, what would be the best type of script (eg could I achieve this using AppleScript or would it need to be made in java or php etc.)
AppleScript can't connect to databases, so you will definitely need to throw something else into the mix. If the choice is between Java and PHP (and you're equally familiar with both), I'd definitely recommend PHP for this purpose, as there will be considerably less code involved.
Your PHP script would look something like this:
$BASEURL = 'http://website.com/content/';
// connect to the database
$dbh = new PDO($DSN, $USERNAME, $PASSWORD);
// query for files without descriptions
$qry = $dbh->query("
SELECT FILENAME FROM mytable
WHERE format = 'Still Image (JPEG)' AND description IS NULL
");
// prepare an update statement
$update = $dbh->prepare('
UPDATE mytable SET description = :d WHERE FILENAME = :f
');
$update->bindParam(':d', $DESCRIPTION);
$update->bindParam(':f', $FILENAME);
// loop over the files
while ($FILENAME = $qry->fetchColumn()) {
// construct URL
$i = strrpos($FILENAME, '.');
$url = $BASEURL . (($i === false) ? $FILENAME : substr($FILENAME, 0, $i));
// fetch the document
$doc = new DOMDocument();
$doc->loadHTMLFile($url);
// get the description
$DESCRIPTION = $doc->getElementsById('description')->nodeValue;
// update the database
$update->execute();
}
I too am not aware of any existing software packages that will do everything you're looking for. However, Python can connect to your database, make web requests easily, and handle dirty html. Assuming you already have Python installed, you'll need three packages:
MySQLdb for connecting to the database.
Requests for easily making http web requests.
BeautifulSoup for robust parsing of html.
You can install these packages with pip commands or Windows installers. Appropriate instructions are on each site. The whole process won't take more than 10 minutes.
import MySQLdb as db
import os.path
import requests
from bs4 import BeautifulSoup

# Connect to the database. Fill in these fields as necessary.
con = db.connect(host='hostname', user='username', passwd='password',
                 db='dbname')

# Create and execute our SELECT sql statement. Note that MySQLdb uses
# %s placeholders, and NULL must be tested with IS NULL.
select = con.cursor()
select.execute('SELECT filename FROM table_name \
                WHERE format = %s AND description IS NULL',
               ('Still Image (JPEG)',))

while True:
    # Fetch a row from the result of the SELECT statement.
    row = select.fetchone()
    if row is None:
        break

    # Use Python's built-in os.path.splitext to split the extension
    # and get the url_name.
    filename = row[0]
    url_name = os.path.splitext(filename)[0]
    url = 'http://www.website.com/content/' + url_name

    # Make the web request. You may want to rate-limit your requests
    # so that the website doesn't get angry. You can slow down the
    # rate by inserting a pause with:
    #
    #   import time    # You can put this at the top with other imports
    #   time.sleep(1)  # This will wait 1 second.
    response = requests.get(url)
    if response.status_code != 200:
        # Don't worry about skipped urls. Just re-run this script
        # on spurious or network-related errors.
        print 'Error accessing:', url, 'SKIPPING'
        continue

    # Parse the result. BeautifulSoup does a great job handling
    # mal-formed input.
    soup = BeautifulSoup(response.content)
    description = soup.find('div', {'id': 'description'}).get_text()

    # And finally, update the database with another query.
    update = con.cursor()
    update.execute('UPDATE table_name SET description = %s \
                    WHERE filename = %s',
                   (description, filename))

# Commit the updates, or they may be rolled back on disconnect.
con.commit()
I'll warn that I've made a good effort to make that code "look right" but I haven't actually tested it. You'll need to fill in the private details.
PHP is a good scraper. I have made a class that wraps PHP's cURL extension here:
http://semlabs.co.uk/journal/object-oriented-curl-class-with-multi-threading
You'll probably need to use some of the options:
http://www.php.net/manual/en/function.curl-setopt.php
To scrape HTML, I usually use regular expressions, but here is a class I made that should be able to query HTML without issues:
http://pastebin.com/Jm9jKjAU
Usage is:
$h = new HTMLQuery();
$h->load( $string_containing_html );
$h->getElements( 'p', 'id' ); // Returns all p tags with an id attribute
The best option to scrape would be XPath, but it can't handle dirty HTML. You can use that to do things like:
//div[@class = 'itm']/p[last() and text() = 'Hello World'] <- selects the last p in div elements that have the innerHTML 'Hello World'
You can use that in PHP with the DOM class (built-in).

Ruby on Rails - get MySql DB size

I want to know the current size of my MySQL DB. To get that data I use the following methods:
def self.calculate_total_db_size
  sql = "SELECT table_schema AS 'database',
         sum( data_length + index_length ) / ( 1024 * 1024 ) AS size
         FROM information_schema.TABLES
         WHERE engine IN ('MyISAM', 'InnoDB')
         AND table_schema = '#{get_current_db_name}'"
  return perform_sql_query(sql)
end

def self.get_current_db_name
  return Rails.configuration.database_configuration[Rails.env]["database"]
end

def self.perform_sql_query(query)
  result = []
  mysql_res = ActiveRecord::Base.connection.execute(query)
  mysql_res.each_hash { |res| result << res }
  return result
end
This works great in my development and staging environments, but for some reason when I run it in production the query doesn't return any value. If I take the MySQL query and run it manually on my production DB, I get the correct values. Why can't I do it through the application in production?
Any thoughts?
I added some more logs and that helped me pinpoint the problem. I was using:
Rails.configuration.database_configuration[Rails.env]["database"]
which returns an empty string in production but not in any other environment. I guess it is because in my database.yml there's a link to the development settings under the production entry (which makes the production settings the same as the dev ones).
Anyway, since I don't want to change my database.yml file, I just changed the way I'm getting the database name.
Now it works great.
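For illustration, a hypothetical database.yml showing how a YAML anchor/alias can produce exactly that symptom - the names here are made up, not taken from the question:

```yaml
# hypothetical database.yml - illustrative names only
defaults: &defaults
  adapter: mysql2
  host: localhost
  username: app
  # note: no "database" key in the shared block

development:
  <<: *defaults
  database: app_development

production:
  <<: *defaults
  # if "database" is forgotten here, production inherits only the
  # shared block, and the database_configuration lookup above
  # returns an empty value in production only
```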
I'll wager $1 it's a permissions issue in production. Log into your production server and fire up the db console (go into your rails dir and type rails db to ensure you're using the same user as your app) and try to select from tables in the information_schema database.
The mysql account you're using in production most likely doesn't have access to the records you want in the information_schema database.