I'm new to Python, trying to learn and use FastAPI, Ariadne and SQLAlchemy. I'm following the docs and I'm stuck.
I have my dependency injection get_db() for normal REST requests, which provides a Session object that I pass through a few different modules from the start of the request until I actually do the db work. Honestly, I don't get this design, but I left it there.
Then another problem came up: how can I pass the db to Ariadne GraphQL? Should I use context_value, or are there other options?
You need to create a SQLAlchemy session from within the Ariadne resolvers, and don't forget to close the session after the resolver finishes.
Let's say your database file looks like the following:
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

SQLALCHEMY_DATABASE_URL = os.getenv("DB_CONN_STRING")

engine = create_engine(SQLALCHEMY_DATABASE_URL)
LocalSession = sessionmaker(autocommit=False, autoflush=False, bind=engine)

def get_db():
    db = None
    try:
        db = LocalSession()
        yield db
    finally:
        if db:
            db.close()
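For context, this get_db() generator is what FastAPI consumes for normal REST requests via Depends; a minimal sketch of that usage (the /items route and list_items name are purely illustrative):

from fastapi import Depends, FastAPI
from sqlalchemy.orm import Session

from app.database import get_db

app = FastAPI()

# Illustrative endpoint: FastAPI calls get_db(), injects the yielded session,
# and runs the generator's finally block (closing the session) after the response.
@app.get("/items")
def list_items(db: Session = Depends(get_db)):
    return {"session_active": db.is_active}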
In the Ariadne resolvers, you can then do the following:
# Import the session factory from the database file above
from app.database import LocalSession

db = LocalSession()
# Do whatever you need to do
db.close()
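As for context_value: yes, that is a sensible option. Ariadne's ASGI app accepts a context_value callable that runs once per request, so you can open the session there and read it from info.context in every resolver instead of instantiating it per resolver. A minimal sketch, with a hypothetical items query and items table; note that newer Ariadne versions may pass extra arguments to the context callable, and closing the session after the response still has to be handled separately (e.g. in ASGI middleware):

from ariadne import QueryType, make_executable_schema
from ariadne.asgi import GraphQL
from sqlalchemy import text

from app.database import LocalSession

type_defs = "type Query { items: [String!]! }"
query = QueryType()

@query.field("items")
def resolve_items(_, info):
    # The session opened in get_context_value is shared by every resolver
    # handling this request.
    db = info.context["db"]
    return [row[0] for row in db.execute(text("SELECT name FROM items"))]

def get_context_value(request):
    # Called once per request by Ariadne's ASGI app.
    return {"request": request, "db": LocalSession()}

schema = make_executable_schema(type_defs, query)
app = GraphQL(schema, context_value=get_context_value)

This keeps the resolvers free of session bookkeeping and guarantees that all resolvers in one request share a single session.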
Is there any way to take the classes that are created by SQLAlchemy's automap (reflected from an existing database) and write these to a models.py file (for use with Flask and a different database)?
Base = automap_base()
Base.prepare(engine, reflect=True)
dir(Base.classes)
...
In Django, this is possible with the inspectdb command:
$ python manage.py inspectdb > models.py
When performing a migration via the commands flask db init and flask db migrate, I receive the following error: directory = current_app.extensions['migrate'].directory KeyError: 'migrate'.
I have created a Migrate object within my __init__.py file, but I still receive the error stated above:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from config import app_config
from flask_login import LoginManager
from flask_migrate import Migrate

# Creating Flask app.
app = Flask(__name__)

# Database variable initialisation.
db = SQLAlchemy()
login_manager = LoginManager()

def create_app(config_name):
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_object(app_config[config_name])
    app.config.from_pyfile('config.py')
    db.init_app(app)

    # Creating login manager object and initialising it.
    # Login view and message prevent a user from accessing a page they are not authorised to view.
    login_manager.init_app(app)
    login_manager.login_message = "Please login to access this page."
    login_manager.login_view = "auth.login"

    migrate = Migrate(app, db)

    from app import models

    return app
Any advice on what I am doing wrong? I have already viewed quite a few pages related to this error and tried the suggested alterations.
I found the problem. The above __init__.py file was completely fine.
My config.py file contained the following line:
SQLALCHEMY_DATABASE_URI = 'mysql+mysqlconnector://{db_user}:{my_password}@localhost/{database}'
... which had a .format() extension appended to it that was not needed. After removing it, I was able to successfully update my database.
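For anyone else hitting this: the flask db commands look up Flask-Migrate's state in app.extensions['migrate'] (that is exactly the lookup in the traceback), so the KeyError simply means Migrate(app, db) never ran for the app instance the CLI loaded. A quick, illustrative sanity check, assuming the create_app factory above and a 'development' config name:

from app import create_app

# If this assertion fails, Migrate(app, db) never executed, e.g. because
# create_app() raised before reaching it (a broken config string will do that).
app = create_app('development')
assert 'migrate' in app.extensions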
I need to mount Azure File Storage to Linux pools when they are being spun up. I am following the instructions given here to achieve that: mounting Azure-File Storage to Batch. Specifically, in my Azure CLI script, under the pool's start task commands, I am inserting something which looks like this:
--start-task-command-line="apt-get update && apt-get install cifs-utils && mkdir -p {} && mount -t cifs {} {} -o vers=3.0,username={},password={},dir_mode=0777,file_mode=0777,serverino".format(_COMPUTE_NODE_MOUNT_POINT, _STORAGE_ACCOUNT_SHARE_ENDPOINT, _COMPUTE_NODE_MOUNT_POINT, _STORAGE_ACCOUNT_NAME, _STORAGE_ACCOUNT_KEY)
But when I run the tasks with the auto-user that Batch uses by default, I get an error in the stderr.txt file mentioning that it was unable to create the "/mnt/MyAzureFileshare" directory, so my guess is that the mounting didn't occur during the pool creation process. I saw a very similar question to the one I am facing: setting custom user identity for tasks. Even the official Microsoft documentation goes over this in detail: Run Tasks under User accounts in Batch. But none of them sheds light on how to achieve this using the Azure CLI.
Installing the packages needed to mount Azure File Storage requires sudo privileges, and I am unable to do that through the Azure CLI. In order to recreate the error, I would recommend having a look at this: app to replicate the issue.
What I want to achieve is:
1) Create a pool with Azure File Storage mounted on it, elevating the privileges of the auto-user to admin level using the Azure CLI
2) Run tasks as the same auto-user with admin privileges using the Azure CLI
Update 1:
I was able to mount Azure File Storage with Batch using the Azure CLI. However, I am still not able to populate the Azure File Storage with the output files of the app that I deployed on the Batch nodes, and I get no error in the stderr.txt files.
The output of the stderr.txt file is:
WARNING: In "login" auth mode, the following arguments are ignored: --account-key
Alive[################################################################] 100.0000%
Finished[#############################################################] 100.0000%
pdf--->png: 0%| | 0/1 [00:00<?, ?it/s]
pdf--->png: 100%|##########| 1/1 [00:00<00:00, 1.16it/s]WARNING: In "login" auth mode, the following arguments are ignored: --account-key
WARNING: uploading /mnt/batch/tasks/workitems/pdf-processing-job-2018-10-29-15-36-15/job-1/mytask-0/wd/png_files-2018-10-29-15-39-25/akronbeaconjournal_20180108_AkronBeaconJournal_0___page---0.png
Alive[################################################################] 100.0000%
Finished[#############################################################] 100.0000%
The Python App that was deployed on the Batch Nodes is:
import os
import fitz
import subprocess
import argparse
import time
from tqdm import tqdm
import sentry_sdk
import sys
import datetime

def azure_active_directory_login(azure_username, azure_password, azure_tenant):
    try:
        azure_login_output = subprocess.check_output(["az", "login", "--service-principal", "--username", azure_username, "--password", azure_password, "--tenant", azure_tenant])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Invalid Azure Login Credentials")
        sys.exit("Invalid Azure Login Credentials")

def download_from_azure_blob(azure_storage_account, azure_storage_account_key, input_azure_container, file_to_process, pdf_docs_path):
    file_to_download = os.path.join(input_azure_container, file_to_process)
    try:
        subprocess.check_output(["az", "storage", "blob", "download", "--container-name", input_azure_container, "--file", os.path.join(pdf_docs_path, file_to_process), "--name", file_to_process, "--account-key", azure_storage_account_key, "--account-name", azure_storage_account, "--auth-mode", "login"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("unable to download the pdf file")
        sys.exit("unable to download the pdf file")

def pdf_to_png(input_folder_path, output_folder_path):
    pdf_files = [x for x in os.listdir(input_folder_path) if x.endswith((".pdf", ".PDF"))]
    pdf_files.sort()
    for pdf in tqdm(pdf_files, desc="pdf--->png"):
        doc = fitz.open(os.path.join(input_folder_path, pdf))
        page_count = doc.pageCount
        for f in range(page_count):
            page = doc.loadPage(f)
            pix = page.getPixmap()
            if pdf.endswith(".pdf"):
                png_filename = pdf.split(".pdf")[0] + "___" + "page---" + str(f) + ".png"
                pix.writePNG(os.path.join(output_folder_path, png_filename))
            elif pdf.endswith(".PDF"):
                png_filename = pdf.split(".PDF")[0] + "___" + "page---" + str(f) + ".png"
                pix.writePNG(os.path.join(output_folder_path, png_filename))

def upload_to_azure_blob(azure_storage_account, azure_storage_account_key, output_azure_container, png_docs_path):
    try:
        subprocess.check_output(["az", "storage", "blob", "upload-batch", "--destination", output_azure_container, "--source", png_docs_path, "--account-key", azure_storage_account_key, "--account-name", azure_storage_account, "--auth-mode", "login"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("Unable to upload file to the container")

def upload_to_fileshare(png_docs_path):
    try:
        subprocess.check_output(["cp", "-r", png_docs_path, "/mnt/MyAzureFileShare/"])
    except subprocess.CalledProcessError:
        sentry_sdk.capture_message("unable to upload to azure file share ")

if __name__ == "__main__":
    # Credentials
    sentry_sdk.init("<Sentry Creds>")
    azure_username = <azure_username>
    azure_password = <azure_password>
    azure_tenant = <azure_tenant>
    azure_storage_account = <azure_storage_account>
    azure_storage_account_key = <azure_account_key>
    try:
        parser = argparse.ArgumentParser()
        parser.add_argument("input_azure_container", type=str, help="Location to download files from")
        parser.add_argument("output_azure_container", type=str, help="Location to upload files to")
        parser.add_argument("file_to_process", type=str, help="file link in azure blob storage")
        args = parser.parse_args()
        timestamp = time.time()
        timestamp_humanreadable = datetime.datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d-%H-%M-%S')
        task_working_dir = os.getcwd()
        file_to_process = args.file_to_process
        input_azure_container = args.input_azure_container
        output_azure_container = args.output_azure_container
        pdf_docs_path = os.path.join(task_working_dir, "pdf_files" + "-" + timestamp_humanreadable)
        png_docs_path = os.path.join(task_working_dir, "png_files" + "-" + timestamp_humanreadable)
        os.mkdir(pdf_docs_path)
        os.mkdir(png_docs_path)
    except Exception as e:
        sentry_sdk.capture_exception(e)
    azure_active_directory_login(azure_username, azure_password, azure_tenant)
    download_from_azure_blob(azure_storage_account, azure_storage_account_key, input_azure_container, file_to_process, pdf_docs_path)
    pdf_to_png(pdf_docs_path, png_docs_path)
    upload_to_azure_blob(azure_storage_account, azure_storage_account_key, output_azure_container, png_docs_path)
    upload_to_fileshare(png_docs_path)
The upload_to_fileshare() function in the Python app above should initiate the upload, but in my case nothing happens, and there is no error from the copy operation in the stderr.txt files.
Please let me know a way to troubleshoot this issue.
It does not look like the run-elevated parameter is exposed via a command-line argument through the CLI. You can, however, specify a JSON file via the --json-file argument, formatted as the REST API object, to get access to all functionality.
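For example, here is a minimal sketch of that approach, staying with the question's pattern of shelling out to az from Python. The pool id, VM size, image, and node agent SKU are assumptions chosen to make the example concrete; the relevant part is startTask.userIdentity, which per the Batch REST API requests pool-scoped admin elevation:

import json
import subprocess

# Illustrative pool definition shaped like the Batch REST API "Add Pool" body.
pool = {
    "id": "mypool",                      # assumed pool id
    "vmSize": "STANDARD_A1_V2",          # assumed VM size
    "virtualMachineConfiguration": {
        "imageReference": {
            "publisher": "Canonical",
            "offer": "UbuntuServer",
            "sku": "18.04-LTS",
            "version": "latest",
        },
        "nodeAgentSKUId": "batch.node.ubuntu 18.04",
    },
    "targetDedicatedNodes": 1,
    "startTask": {
        # The mount command from the question would go here.
        "commandLine": "/bin/bash -c 'apt-get update && apt-get install -y cifs-utils'",
        "waitForSuccess": True,
        "userIdentity": {
            # This is what elevates the auto-user to admin for the start task.
            "autoUser": {"scope": "pool", "elevationLevel": "admin"}
        },
    },
}

with open("pool.json", "w") as f:
    json.dump(pool, f, indent=2)

# Assumes the CLI is already authenticated against the Batch account.
subprocess.check_output(["az", "batch", "pool", "create", "--json-file", "pool.json"])

Tasks can be elevated the same way by putting an identical userIdentity block in the task's JSON definition.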
I have been using MySQL in a new Rails application, but now I wanted to give MongoDB a try, so I installed MongoMapper and Mongoid (to use a mongo session). The installation seems to be fine because I can create mongo models. But for some reason Rails is still trying to connect to MySQL: Can't connect to local MySQL server.
This is horrible, because even if I weren't using mongo, Rails shouldn't be trying to connect to MySQL on every request. It's throwing that error even for non-existent URLs.
What can I do to debug this? I guess I could try removing the mysql gem from the Gemfile and running bundle install, but I still don't like the fact that it tries to connect even when I'm not using it. Shouldn't it connect lazily (i.e. only on demand)?
development.rb:
Myapp::Application.configure do
  # Settings specified here will take precedence over those in config/application.rb

  # In the development environment your application's code is reloaded on
  # every request. This slows down response time but is perfect for development
  # since you don't have to restart the web server when you make code changes.
  config.cache_classes = false

  # Log error messages when you accidentally call methods on nil.
  config.whiny_nils = true

  # Show full error reports and disable caching
  config.consider_all_requests_local = true
  config.action_controller.perform_caching = false

  # Don't care if the mailer can't send
  config.action_mailer.raise_delivery_errors = false

  # Print deprecation notices to the Rails logger
  config.active_support.deprecation = :log

  # Only use best-standards-support built into browsers
  config.action_dispatch.best_standards_support = :builtin

  # Raise exception on mass assignment protection for Active Record models
  config.active_record.mass_assignment_sanitizer = :strict

  # Log the query plan for queries taking more than this (works
  # with SQLite, MySQL, and PostgreSQL)
  config.active_record.auto_explain_threshold_in_seconds = 0.5

  # Do not compress assets
  config.assets.compress = false

  # Expands the lines which load the assets
  config.assets.debug = true
end
application.rb:
require File.expand_path('../boot', __FILE__)

require 'rails/all'

if defined?(Bundler)
  # If you precompile assets before deploying to production, use this line
  Bundler.require(*Rails.groups(:assets => %w(development test)))
  # If you want your assets lazily compiled in production, use this line
  # Bundler.require(:default, :assets, Rails.env)
end

module Myapp
  class Application < Rails::Application
    # Settings in config/environments/* take precedence over those specified here.
    # Application configuration should go into files in config/initializers
    # -- all .rb files in that directory are automatically loaded.

    # Custom directories with classes and modules you want to be autoloadable.
    # config.autoload_paths += %W(#{config.root}/extras)

    # Only load the plugins named here, in the order given (default is alphabetical).
    # :all can be used as a placeholder for all plugins not explicitly named.
    # config.plugins = [ :exception_notification, :ssl_requirement, :all ]

    # Activate observers that should always be running.
    # config.active_record.observers = :cacher, :garbage_collector, :forum_observer

    # Set Time.zone default to the specified zone and make Active Record auto-convert to this zone.
    # Run "rake -D time" for a list of tasks for finding time zone names. Default is UTC.
    # config.time_zone = 'Central Time (US & Canada)'

    # The default locale is :en and all translations from config/locales/*.rb,yml are auto loaded.
    # config.i18n.load_path += Dir[Rails.root.join('my', 'locales', '*.{rb,yml}').to_s]
    # config.i18n.default_locale = :de

    # Configure the default encoding used in templates for Ruby 1.9.
    config.encoding = "utf-8"

    # Configure sensitive parameters which will be filtered from the log file.
    config.filter_parameters += [:password]

    # Use SQL instead of Active Record's schema dumper when creating the database.
    # This is necessary if your schema can't be completely dumped by the schema dumper,
    # like if you have constraints or database-specific column types
    # config.active_record.schema_format = :sql

    # Enforce whitelist mode for mass assignment.
    # This will create an empty whitelist of attributes available for mass-assignment for all models
    # in your app. As such, your models will need to explicitly whitelist or blacklist accessible
    # parameters by using an attr_accessible or attr_protected declaration.
    # config.active_record.whitelist_attributes = true

    # Enable the asset pipeline
    config.assets.enabled = true

    # Version of your assets, change this if you want to expire all your assets
    config.assets.version = '1.0'

    config.generators do |g|
      g.orm :mongo_mapper
    end
  end
end
When ActiveRecord is part of the application, it tries to establish a connection to the database at startup. If it fails to connect, the application won't start.
The problem is here:
require 'rails/all'
This line includes all the "usual" Rails components, ActiveRecord among them. If you go to its definition, it should look like this (for Rails 3.2):
require "rails"
%w(
active_record
action_controller
action_mailer
active_resource
rails/test_unit
sprockets
).each do |framework|
begin
require "#{framework}/railtie"
rescue LoadError
end
end
Take this code, remove the active_record line, and put it in place of your require 'rails/all' line. Now ActiveRecord isn't included, and your application will loudly fail wherever it sees ActiveRecord references in the code, like this:
config.active_record.mass_assignment_sanitizer = :strict
You need to remove these too. You don't need to delete database.yml, but you probably should, since it has no meaning now.
You also need to delete the lines where you configure ActiveRecord:
# Raise exception on mass assignment protection for Active Record models
config.active_record.mass_assignment_sanitizer = :strict
# Log the query plan for queries taking more than this (works
# with SQLite, MySQL, and PostgreSQL)
config.active_record.auto_explain_threshold_in_seconds = 0.5
I presume you've checked your database.yml to make sure nothing is looking for adapter: mysql?
Also, if you've left mysql in your Gemfile, then the mysql gem will bomb out as it can't do its part and therefore has to fail as a dependency. Remove it and rerun bundle install.