Apache2 No permission to write file [Errno 13] Permission denied Flask Python - json

A few details first
I built a small web application with Flask. It should log the visitor's IP address whenever someone requests or visits the website.
Everything works on Windows, but I installed Flask and moved the project over to a Linux server where I have Apache2 installed, and I configured Apache to handle the requests for the Flask web app.
The templates load just fine, but the part that logs the IP doesn't work.
Getting the IP is not the problem; storing it in, say, a JSON file is.
Every time the route runs I get a 500 error on my website.
Apache Error Log : [Errno 13] Permission denied '/opt/iplogs/iplog.json'
The Python Code
import json
from datetime import datetime
from flask import Flask, request, render_template

app = Flask(__name__)

def writeToJSONFile(path, fileName, data):
    filePathNameWExt = path + fileName + '.json'
    with open(filePathNameWExt, 'a') as fp:
        json.dump(data, fp, indent=2)
@app.route("/")
def getIP():
    visit = {}
    ip_visit = request.remote_addr
    now = datetime.now()
    request_time = now.strftime("%d/%m/%Y %H:%M:%S")
    visit["IP"] = str(ip_visit)
    visit["date"] = str(request_time)
    writeToJSONFile("/opt/iplogs/", "iplog", visit)  # when I comment this call out, there is no 500 error
    return render_template("home.html")
The Main Problem
So in a development environment on Windows it works fine, and it also works on Linux when I let Flask run on its own without Apache handling the requests.
Only when I run the website through Apache do I get the "Permission denied" error.
So it has to be something to do with Apache and its permission to write?
Note that the folder where my Flask (Python) code lives is completely different from the one where the IPs are logged.
Also, I use Ubuntu and I didn't change anything regarding file permissions; I'm even running as root (I know I shouldn't, but it's only for testing a very small project).
That's all I can give you.
Thanks for all the responses.

Try this:
sudo chown -R www-data:www-data /opt/iplogs/
The Apache2 user www-data has no permission to write to this directory.
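As a side note, not part of the fix above: if you also want a failed write to be logged instead of turning every request into a 500 error, you could wrap the file access. A minimal sketch (names and behaviour are illustrative, not the poster's exact code):
import json
import logging

# Same idea as the poster's writeToJSONFile, but a permission problem is
# logged as a warning instead of crashing the request handler.
def write_to_json_file(path, file_name, data):
    file_path = path + file_name + ".json"
    try:
        with open(file_path, "a") as fp:
            json.dump(data, fp, indent=2)
            fp.write("\n")
    except OSError as exc:  # PermissionError ([Errno 13]) is a subclass of OSError
        logging.getLogger(__name__).warning("Could not write %s: %s", file_path, exc)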

Related

Heroku SSL connection error unsupported protocol

I have been using Heroku for a while to host my Discord bot. It has been connecting to a MySQL database hosted on ClearDB successfully. However, very recently, whenever I use the bot and it tries to connect to the database, it throws this error:
2026 (HY000): SSL connection error: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
It had been working completely fine until now, and I haven't changed anything. For background, the only thing I did was delete a pipeline and make my app a standalone app without any pipeline, in case that helps.
Is this because Heroku has been updated? How can I fix my bot? Let me know if you need any more information.
Any help is appreciated, and thank you in advance!
EDIT:
Database connection code:
import mysql.connector

def create_conn():
    conn = None
    try:
        conn = mysql.connector.connect(host="HOST",
                                       database="DB",
                                       user="USER",
                                       password="PWD")
    except Exception as e:
        print(e)
    return conn

def execute_query(query, params, fetchall=True):
    conn = create_conn()
    if conn:
        cursor = conn.cursor()
        cursor.execute(query % params)
        try:
            if fetchall:
                results = cursor.fetchall()
            else:
                results = cursor.fetchone()
        except:
            results = None
        conn.commit()
        cursor.close()
        conn.close()
        return results
    else:
        return False
The database connection used to work, and it still works when I run the code on my testing machine, a Raspberry Pi.
EDIT 2:
requirements.txt:
aiohttp==3.6.3
async-timeout==3.0.1
attrs==20.3.0
CacheControl==0.12.6
cachetools==4.2.0
certifi==2020.12.5
cffi==1.14.4
chardet==3.0.4
click==7.1.2
cryptography==3.3.1
cssselect==1.1.0
cssutils==1.0.2
discord==1.0.1
discord-pretty-help==1.2.0
discord.py==1.6.0
emoji==0.6.0
Flask==1.1.2
google-api-core==1.24.1
google-api-python-client==1.12.8
google-auth==1.24.0
google-auth-httplib2==0.0.4
google-cloud-core==1.5.0
google-cloud-firestore==2.0.2
google-cloud-storage==1.35.0
google-crc32c==1.1.0
google-resumable-media==1.2.0
googleapis-common-protos==1.52.0
grpcio==1.34.0
gunicorn==20.0.4
httplib2==0.18.1
idna==2.8
importlib-metadata==3.3.0
itsdangerous==1.1.0
jeepney==0.6.0
Jinja2==2.11.2
keyring==21.8.0
lxml==4.6.2
MarkupSafe==1.1.1
msgpack==1.0.2
multidict==4.7.6
mysql-connector-python==8.0.22
numpy==1.19.4
pandas==1.1.5
premailer==3.7.0
proto-plus==1.13.0
protobuf==3.14.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
python-dateutil==2.8.1
python-dotenv==0.15.0
pytz==2020.4
requests==2.25.1
rsa==4.7
schedule==0.6.0
SecretStorage==3.3.0
six==1.15.0
typing-extensions==3.7.4.3
uritemplate==3.0.1
urllib3==1.26.2
Werkzeug==1.0.1
yagmail==0.14.245
yarl==1.5.1
zipp==3.4.0
Just in case, you can turn off SSL with:
conn = mysql.connector.connect(host="HOST",
                               database="DB",
                               user="USER",
                               password="PWD",
                               ssl_disabled=True)
I'm not quite sure how to do this properly, but I'm fairly sure you have to disable SSL for it to work. Hope this helps.
Clearly, you need to enforce an SSL connection between your app and MySQL.
If you are using a Ruby stack, follow these steps and the SSL error should be resolved.
Download the CA, client, and private key files from your ClearDB dashboard and place them in the root of the application's filesystem.
Make sure you have OpenSSL installed (it is available for Unix/Linux/OS X and for Windows).
Due to the MySQL client library configuration used on Heroku, you will need to strip the password from the private key file, which can be done like this:
$ openssl rsa -in cleardb_id-key.pem -out cleardb_id-key-no-password.pem
You can now delete cleardb_id-key.pem and rename cleardb_id-key-no-password.pem to cleardb_id-key.pem, which you will use with your app.
Set the DATABASE_URL config variable to the value of your modified CLEARDB_DATABASE_URL, like this:
$ heroku config:add DATABASE_URL="mysql2://abc1223:dfk243@us-cdbr-east.cleardb.com/my_heroku_db?sslca=cleardb-ca-cert.pem&sslcert=cleardb_id-cert.pem&sslkey=cleardb_id-key.pem&reconnect=true"
Notice how we added the "reconnect=true" parameter to the end of the URL? This is so that your application will automatically reconnect to ClearDB in the event of a connection timeout.
From here, simply restart your application (if Heroku didn't already do that for you), and as long as you specified the correct file names and paths to the certificates in your DATABASE_URL, your app will now connect to ClearDB via SSL.
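For the Python code in the question, a rough equivalent is to point mysql.connector at the same certificate files. This is only a sketch; the file names below simply mirror the ClearDB names used above and are assumed to sit in the app's working directory:
import mysql.connector

# Assumes the three ClearDB certificate files are deployed alongside the app.
conn = mysql.connector.connect(host="HOST",
                               database="DB",
                               user="USER",
                               password="PWD",
                               ssl_ca="cleardb-ca-cert.pem",    # CA certificate
                               ssl_cert="cleardb_id-cert.pem",  # client certificate
                               ssl_key="cleardb_id-key.pem")    # client private key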

Azure Batch :Elevating the user privileges during Pool Creation using Azure CLI

I need to mount Azure File Storage on Linux pools when they are being spun up. I am following the instructions given here to achieve that: mounting Azure-File Storage to Batch. Specifically, in my Azure CLI script, under the pool's start task command, I am inserting something that looks like this:
--start-task-command-line="apt-get update && apt-get install cifs-utils && mkdir -p {} && mount -t cifs {} {} -o vers=3.0,username={},password={},dir_mode=0777,file_mode=0777,serverino".format(_COMPUTE_NODE_MOUNT_POINT, _STORAGE_ACCOUNT_SHARE_ENDPOINT, _COMPUTE_NODE_MOUNT_POINT, _STORAGE_ACCOUNT_NAME, _STORAGE_ACCOUNT_KEY)
But when I run the tasks with the auto-user that Batch uses by default, I get an error in the stderr.txt file saying that it was unable to create the "/mnt/MyAzureFileshare" directory, so my guess is the mount didn't happen during pool creation. I saw a very similar question to the one I am facing (setting custom user identity for tasks), and the official Microsoft documentation covers this in detail (Run Tasks under User accounts in Batch), but neither sheds light on how to achieve it using the Azure CLI.
Installing the packages needed to mount Azure File Storage requires sudo privileges, and I am unable to do that through the Azure CLI. To reproduce the error, have a look at this: app to replicate the issue.
What I want to achieve is:
1) Create a pool with Azure File Storage mounted on it and elevate the privileges of the auto-user to admin level using the Azure CLI
2) Run tasks as the same auto-user with admin privileges using the Azure CLI
Update 1:
I was able to mount Azure File Storage with Batch using the Azure CLI. However, I am still not able to populate the file share with the output files of the app I deployed on the Batch nodes, and there is no error in the stderr.txt files.
The output of the stderr.txt file is:
WARNING: In "login" auth mode, the following arguments are ignored: --account-key
Alive[################################################################] 100.0000%
Finished[#############################################################] 100.0000%
pdf--->png: 0%| | 0/1 [00:00<?, ?it/s]
pdf--->png: 100%|##########| 1/1 [00:00<00:00, 1.16it/s]WARNING: In "login" auth mode, the following arguments are ignored: --account-key
WARNING: uploading /mnt/batch/tasks/workitems/pdf-processing-job-2018-10-29-15-36-15/job-1/mytask-0/wd/png_files-2018-10-29-15-39-25/akronbeaconjournal_20180108_AkronBeaconJournal_0___page---0.png
Alive[################################################################] 100.0000%
Finished[#############################################################] 100.0000%
The Python App that was deployed on the Batch Nodes is:
import os
import fitz
import subprocess
import argparse
import time
from tqdm import tqdm
import sentry_sdk
import sys
import datetime

def azure_active_directory_login(azure_username,azure_password,azure_tenant):
    try:
        azure_login_output=subprocess.check_output(["az","login","--service-principal","--username",azure_username,"--password",azure_password,"--tenant",azure_tenant])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Invalid Azure Login Credentials")
        sys.exit("Invalid Azure Login Credentials")

def download_from_azure_blob(azure_storage_account,azure_storage_account_key,input_azure_container,file_to_process,pdf_docs_path):
    file_to_download=os.path.join(input_azure_container,file_to_process)
    try:
        subprocess.check_output(["az","storage","blob","download","--container-name",input_azure_container,"--file",os.path.join(pdf_docs_path,file_to_process),"--name",file_to_process,"--account-key",azure_storage_account_key,
                                 "--account-name",azure_storage_account,"--auth-mode","login"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("unable to download the pdf file")
        sys.exit("unable to download the pdf file")

def pdf_to_png(input_folder_path,output_folder_path):
    pdf_files=[x for x in os.listdir(input_folder_path) if x.endswith((".pdf",".PDF"))]
    pdf_files.sort()
    for pdf in tqdm(pdf_files,desc="pdf--->png"):
        doc=fitz.open(os.path.join(input_folder_path,pdf))
        page_count=doc.pageCount
        for f in range(page_count):
            page=doc.loadPage(f)
            pix = page.getPixmap()
            if pdf.endswith(".pdf"):
                png_filename=pdf.split(".pdf")[0]+"___"+"page---"+str(f)+".png"
                pix.writePNG(os.path.join(output_folder_path,png_filename))
            elif pdf.endswith(".PDF"):
                png_filename=pdf.split(".PDF")[0]+"___"+"page---"+str(f)+".png"
                pix.writePNG(os.path.join(output_folder_path,png_filename))

def upload_to_azure_blob(azure_storage_account,azure_storage_account_key,output_azure_container,png_docs_path):
    try:
        subprocess.check_output(["az","storage","blob","upload-batch","--destination",output_azure_container,"--source",png_docs_path,"--account-key",azure_storage_account_key,
                                 "--account-name",azure_storage_account,"--auth-mode","login"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to upload file to the container")

def upload_to_fileshare(png_docs_path):
    try:
        subprocess.check_output(["cp","-r",png_docs_path,"/mnt/MyAzureFileShare/"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("unable to upload to azure file share ")

if __name__=="__main__":
    # Credentials
    sentry_sdk.init("<Sentry Creds>")
    azure_username=<azure_username>
    azure_password=<azure_password>
    azure_tenant=<azure_tenant>
    azure_storage_account=<azure_storage_account>
    azure_storage_account_key=<azure_account_key>
    try:
        parser = argparse.ArgumentParser()
        parser.add_argument("input_azure_container",type=str,help="Location to download files from")
        parser.add_argument("output_azure_container",type=str,help="Location to upload files to")
        parser.add_argument("file_to_process",type=str,help="file link in azure blob storage")
        args = parser.parse_args()
        timestamp = time.time()
        timestamp_humanreadable= datetime.datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d-%H-%M-%S')
        task_working_dir=os.getcwd()
        file_to_process=args.file_to_process
        input_azure_container=args.input_azure_container
        output_azure_container=args.output_azure_container
        pdf_docs_path=os.path.join(task_working_dir,"pdf_files"+"-"+timestamp_humanreadable)
        png_docs_path=os.path.join(task_working_dir,"png_files"+"-"+timestamp_humanreadable)
        os.mkdir(pdf_docs_path)
        os.mkdir(png_docs_path)
    except Exception as e:
        sentry_sdk.capture_exception(e)
    azure_active_directory_login(azure_username,azure_password,azure_tenant)
    download_from_azure_blob(azure_storage_account,azure_storage_account_key,input_azure_container,file_to_process,pdf_docs_path)
    pdf_to_png(pdf_docs_path,png_docs_path)
    upload_to_azure_blob(azure_storage_account,azure_storage_account_key,output_azure_container,png_docs_path)
    upload_to_fileshare(png_docs_path)
The upload_to_fileshare() function in the Python app above should perform the upload, but in my case nothing happens, and there is no error from the copy operation in the stderr.txt files.
Please let me know how to troubleshoot this issue.
It does not look like the run-elevated setting is exposed as a command-line argument in the CLI. You can, however, pass a JSON file to the --json argument, formatted as the REST API object, to get the full functionality.
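For illustration only (the exact schema should be checked against the Batch REST API documentation, and most required pool properties are omitted here): the JSON file would carry a start task whose auto-user is elevated to admin. A small Python snippet that writes such a file might look like this; the pool id, VM size, and command line are placeholders:
import json

# Hypothetical pool definition; only the startTask/userIdentity part is the point here.
pool = {
    "id": "my-mount-pool",               # placeholder pool id
    "vmSize": "STANDARD_A1_V2",          # placeholder VM size
    "startTask": {
        "commandLine": "/bin/bash -c 'apt-get update && apt-get install -y cifs-utils'",
        "waitForSuccess": True,
        "userIdentity": {
            "autoUser": {
                "scope": "pool",
                "elevationLevel": "admin"   # run the start task with admin rights
            }
        }
    }
}

with open("pool.json", "w") as fp:
    json.dump(pool, fp, indent=2)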

cx_freeze to access .json files

I have created a Windows application using Python's cx_Freeze module. The application uses the openpyxl module, which runs fine as a script, but when frozen it fails to find the .constants.json files. The following error is displayed:
FileNotFoundError: [Errno 2] No such file or directory: 'C:....\exe.win-amd64-3.4\library.zip\openpyxl.constants.json'
I have found a fix for this (https://cx-freeze.readthedocs.org/en/latest/faq.html#using-data-files), detailed below:
import os
import sys

def find_data_file(filename):
    if getattr(sys, 'frozen', False):
        # The application is frozen
        datadir = os.path.dirname(sys.executable)
    else:
        # The application is not frozen
        # Change this bit to match where you store your data files:
        datadir = os.path.dirname(__file__)
    return os.path.join(datadir, filename)
The question I have is where do I paste this code? Does it go in the setup.py file? Or somewhere else?
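Not an authoritative answer, but as an illustration of how the helper is meant to be used: it is ordinary application code rather than setup.py configuration, so it would typically sit in the module that needs the data file and be called wherever the path is built. A minimal sketch, with a purely illustrative file name and assuming find_data_file() from above is defined in the same module:
import json

# Hypothetical usage: resolve the data file relative to the frozen executable at runtime.
with open(find_data_file("constants.json")) as fh:
    constants = json.load(fh)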

OPENSHIFT - 'wsgi.py' does not contain WSGI application 'application'

I am trying to start a new Python 3.3 app on OpenShift. Due to the changes they made a couple of weeks ago, I am having trouble getting the new app started. I copied a known working app into a new OpenShift app, and I am now getting this error:
Target WSGI script '/var/lib/openshift/<masked>/app-root/runtime/repo/wsgi.py' does not contain WSGI application 'application'.
My wsgi.py code is the same in both the working and the non-working apps:
import os
from myapp import main_production_no_pserve

if __name__ == '__main__':
    ip = os.environ['OPENSHIFT_PYTHON_IP']
    port = int(os.environ['OPENSHIFT_PYTHON_PORT'])
    app = main_production_no_pserve(global_config=None)
    from waitress import serve
    print("Starting Waitress Server on http://{0}:{1}".format(ip, port))
    serve(app, host=ip, port=port, threads=50)
The working app was created before the mid-March OpenShift change. Something is different now, because the exact same code does not work.
It seems OpenShift is now looking for an application entry point within wsgi.py. In the old setup of the working app, wsgi.py is simply executed rather than searched for an application callable. For kicks, I tried this, based on the default wsgi.py created for a new OpenShift app:
import os
from myapp import main_production_no_pserve

def application(environ, start_response):
    ip = os.environ['OPENSHIFT_PYTHON_IP']
    port = int(os.environ['OPENSHIFT_PYTHON_PORT'])
    app = main_production_no_pserve(global_config=None)
    from waitress import serve
    print("Starting Waitress Server on http://{0}:{1}".format(ip, port))
    serve(app, host=ip, port=port, threads=50)
    return
With that change I get the dreaded OSError: [Errno 98] Address already in use, and rhc app-force-stop does nothing.
I am at a loss as to why one app works and the other does not. I would appreciate some help. :)
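Not a definitive fix, but for comparison, a wsgi.py that exposes the callable the new setup appears to look for would do no serving of its own. A minimal sketch, assuming main_production_no_pserve returns a standard WSGI app and the platform's own front end does the listening (so waitress is not started by hand):
from myapp import main_production_no_pserve

# Build the WSGI app once at import time and bind it to the name "application";
# no serve() call here, since the hosting layer handles the sockets itself.
application = main_production_no_pserve(global_config=None)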

Smtp error 451 Temporary local - please try later on Cpanel Server

I have a cPanel server.
It sends emails correctly except from one domain hosted on the server. When I try to send email from that domain using Roundcube or Horde, I get the error:
SMTP Error (451): Failed to add recipient "recipient@exmple.com" (Temporary local problem - please try later).
Does anyone know why, and how to fix this?
I found the problem:
After reviewing the file /var/log/exim_mainlog using
tail -f /var/log/exim_mainlog
I noticed that the error was:
2013-05-29 20:04:28 SMTP connection from [127.0.0.1]:36797 (TCP/IP connection count = 1)
2013-05-29 20:04:28 lowest numbered MX record points to local host: domain.com (while verifying <user@domain.com> from host localhost.localdomain (domain.com) [127.0.0.1]:36797)
2013-05-29 20:04:28 H=localhost.localdomain (domain.com) [127.0.0.1]:36797 sender verify defer for <user@domain.com>: lowest numbered MX record points to local host
2013-05-29 20:04:28 H=localhost.localdomain (domain.com) [127.0.0.1]:36797 F=<user@domain.com> A=dovecot_login:narena temporarily rejected RCPT <recipient@exmple.com>: Could not complete sender verify
2013-05-29 20:04:28 SMTP connection from localhost.localdomain (domain.com) [127.0.0.1]:36797 closed by QUIT
So the main problem was:
lowest numbered MX record points to local host
After a bit of searching I found the solution in http://forums.cpanel.net/f5/lowest-numbered-mx-record-points-local-host-73563.html
which was to:
Log in to WHM and go to Main >> DNS Functions >> Edit MX Entry for the domain,
then set the MX priority to 0 for that domain and save.
I had the same problem after running a script to fix directory permissions on a cPanel-powered server (CentOS 6.5). I checked the logfile (tail -f /var/log/exim_mainlog) and found this error:
require_files: error for /home/user_name/etc/domain.com: Permission denied
I just ran the following command and the issue was fixed:
chown -R user_name:mail /home/user_name/etc/
Hope this helps someone.
Check the file /var/log/exim_mainlog to see more information about the error:
tail -f /var/log/exim_mainlog
while trying to send the email.
Check your MX entry in cPanel; if the existing domain priority is less than or equal to 0, set it to 1. That fixed mine. Hope it helps you.
Wow, after about an hour of searching and meddling with different files, I'd caution any novice not to start editing anything before you have a backup or image of your server, as you can cause irrevocable damage. A lot of people suggest things to try without offering any real solution.
Anyway, here's what worked for me:
Real problem: Exim was updated to the latest version, which has a number of bugs, including this issue.
How I fixed my server:
Log in to the server via SSH and run the following commands to download and install the old version of Exim.
Command Line 1: wget https://ca1.dynanode.net/exim-4.93-3.el7.x86_64.rpm
Command Line 2: rpm -Uvh --oldpackage exim-4.93-3.el7.x86_64.rpm
Command Line 3: systemctl restart exim
Command Line 4: systemctl restart clamd
Command Line 5: systemctl restart spamassassin
Optional: run "reboot" to restart your server
The commands above do the following:
Download the old package (you can likely find other sources for this file)
Install the old package without prompting
Restart the Exim service
Restart the Clamd service (AV)
Restart the SpamAssassin service (spam filter)
Restart Outlook or whatever you use as a mail client and send an email. Mine works; hope yours does too.