How to pass arguments to create_engine in Pylons application? - sqlalchemy

I'd like to customize database connection in Pylons application.
I'm interested in changing these arguments:
pool_size
pool_recycle
In SQLAlchemy these arguments are passed to the *create_engine* call, as documented here:
http://docs.sqlalchemy.org/en/rel_0_9/core/engines.html?highlight=pool_size#sqlalchemy.create_engine
How can I change those params in Pylons?

You should be able to do it right in your ini file. It's been a while since I did Pylons, but IIRC, assuming you said yes to the SQLAlchemy question when you created the project, all items prefixed with "sqlalchemy." are passed as keyword arguments to SQLAlchemy's engine_from_config function. See the config/environment.py file for more.
sqlalchemy.url = <your db connection info>
sqlalchemy.pool_size = ??
sqlalchemy.pool_recycle = ??
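To make the mechanism concrete, here is a minimal sketch of what the generated config/environment.py does with those settings (names follow the standard Pylons template and the pool values are just examples; your project may differ):

from sqlalchemy import engine_from_config

def load_environment(global_conf, app_conf):
    # engine_from_config strips the "sqlalchemy." prefix and passes every
    # remaining key to create_engine as a keyword argument, coercing known
    # options to the right type. So the ini lines
    #   sqlalchemy.pool_size = 10
    #   sqlalchemy.pool_recycle = 3600
    # become create_engine(..., pool_size=10, pool_recycle=3600).
    engine = engine_from_config(app_conf, prefix='sqlalchemy.')
    ...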

Related

Connecting Atoti to Oracle database

I want to make a connection to an Oracle database and I have found the following method in the docs:
https://docs.atoti.io/latest/lib/atoti.store.html?highlight=jdbc#atoti.store.Store.load_sql
I call this method with something like this: my_store.load_sql(url, query, username=my_username, password=my_password)
And I use a URL with this form: 'jdbc:XX.XX.XX.XX:YYYY/ZZZZ', but I get the following error:
ValueError: No driver provided and cannot infer it from URL.
I also created this config with a path to a jdbc jar file in my SQL Developer folder, but the error persists:
my_jdbc = 'ojdbc8.jar'
tt.config.create_config(extra_jars = my_jdbc)
Does anyone know how I can solve it or have any example of loading stores from an Oracle database?
Thanks in advance.
The atoti-sql plugin comes with the Oracle driver, so you don't need to add an extra jar to the config. However, you do need to pass the driver when calling my_store.load_sql. The available drivers can be found in the atoti_sql.drivers module.
In your case since you are using an Oracle database, the correct code should be something like:
import atoti_sql

my_store.load_sql(
    url,
    query,
    username=my_username,
    password=my_password,
    driver=atoti_sql.drivers.ORACLE,
)
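As a side note (an inference from the error message, not something from the atoti docs): "cannot infer it from URL" usually means the URL lacks a jdbc:<vendor>: subprotocol; a typical Oracle thin URL looks like jdbc:oracle:thin:@//XX.XX.XX.XX:YYYY/ZZZZ, and with such a URL the driver can often be inferred.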

ImportError: cannot import name 'persist'

I want to persist a trained model in CNTK and found the 'persist' functionality after some amount of searching. However, there seems to be some error in importing it.
from cntk import persist
This is throwing ImportError.
Am I doing something the wrong way? Or is this no longer supported? Is there an alternate way to persist a model?
persist is from an earlier beta. save_model is now a method of every CNTK function, so instead of doing save_model(z, filename) you do z.save_model(filename). load_model works the same as before, but you import it from cntk.ops.functions. For an example, see: https://github.com/Microsoft/CNTK/blob/v2.0.beta7.0/Tutorials/CNTK_203_Reinforcement_Learning_Basics.ipynb or https://github.com/Microsoft/CNTK/blob/v2.0.beta7.0/bindings/python/cntk/tests/persist_test.py
The functionality has moved to cntk functions. The new way is mynetwork.save_model(...) where mynetwork represents the root of your computation (typically the prediction). For loading the model you can just say mynetwork = C.load_model(...)
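Putting both answers together, a minimal sketch of the newer API (using the beta-era names from the linked notebook; exact module paths and names may differ between CNTK versions):

import cntk as C
from cntk.ops.functions import load_model

# Build a trivial network just to have a root Function to save; any
# trained network (typically the prediction node) is saved the same way.
x = C.input_variable(2)
z = C.layers.Dense(1)(x)

z.save_model('model.dnn')     # save is now a method on the Function itself
z2 = load_model('model.dnn')  # load_model is imported from cntk.ops.functions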

How to use flask-migrate with other declarative_bases

I'm trying to implement python-social-auth in Flask. I've ironed out tons of kinks while trying to interpret about four tutorials and a full Flask book at the same time, and I feel I've reached sort of an impasse with Flask-Migrate.
I'm currently using the following code to create the tables necessary for python-social-auth to function in a flask-sqlalchemy environment.
from social.apps.flask_app.default import models
models.PSABase.metadata.create_all(db.engine)
Now, they're obviously using some form of their own Base, not related to my actual db object. This in turn causes Flask-Migrate to completely miss these tables and drop them in migrations. Now, obviously I can remove those drop statements from every migration, but I can imagine it being one of those things that at some point is going to be forgotten about, and all of a sudden I have no OAuth ties anymore.
I've gotten this solution to work by using (and modifying) the manage.py command syncdb, as suggested by the python-social-auth Flask example.
Miguel Grinberg, the author of Flask-Migrate, replies here to an issue that seems to closely resemble mine.
The closest I could find on Stack Overflow was this, but it doesn't shed too much light on the matter for me, and the answer was never accepted (and I can't get it to work, though I have tried a few times).
For reference, here is my manage.py:
#!/usr/bin/env python
from flask.ext.script import Server, Manager, Shell
from flask.ext.migrate import Migrate, MigrateCommand

from app import app, db

manager = Manager(app)
manager.add_command('runserver', Server())
manager.add_command('shell', Shell(make_context=lambda: {
    'app': app,
    'db_session': db.session
}))

migrate = Migrate(app, db)
manager.add_command('db', MigrateCommand)

@manager.command
def syncdb():
    from social.apps.flask_app.default import models
    models.PSABase.metadata.create_all(db.engine)
    db.create_all()

if __name__ == '__main__':
    manager.run()
And to clarify, the db init / migrate / upgrade commands only create my user table (and the migration one obviously), but not the social auth ones, while the syncdb command works for the python-social-auth tables.
I understand from the GitHub response that this isn't supported by Flask-Migrate, but I'm wondering if there's a way to fiddle the PSABase tables in so they are picked up by the db object passed into Migrate.
Any suggestions welcome.
(Also, first-time poster. I feel I've done a lot of research and tried quite a few solutions before I finally came here to post. If I've missed something obvious in the guidelines of SO, don't hesitate to point that out to me in a private message and I'll happily oblige)
After the helpful answer from Miguel here, I got some new keywords to research. I ended up at a helpful GitHub page with further references to, among others, the Alembic Bitbucket site, which helped immensely.
In the end I did this in my Alembic migration env.py file:
from sqlalchemy import engine_from_config, pool, MetaData

[...]

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from flask import current_app

config.set_main_option('sqlalchemy.url',
                       current_app.config.get('SQLALCHEMY_DATABASE_URI'))

def combine_metadata(*args):
    m = MetaData()
    for metadata in args:
        for t in metadata.tables.values():
            t.tometadata(m)
    return m

from social.apps.flask_app.default import models

target_metadata = combine_metadata(
    current_app.extensions['migrate'].db.metadata,
    models.PSABase.metadata)
This seems to work absolutely perfectly.
The problem is that you have two sets of models, each with a different SQLAlchemy metadata object. The models from PSA were generated directly with SQLAlchemy, while your own models were generated through Flask-SQLAlchemy.
Flask-Migrate only sees the models that are defined via Flask-SQLAlchemy, because the db object that you give it only knows about the metadata for those models; it knows nothing about the other PSA models that bypassed Flask-SQLAlchemy.
So the end result is that each time you generate a migration, Flask-Migrate/Alembic finds these PSA tables in the database and decides to delete them, because it does not see any models for them.
I think the best solution for your problem is to configure Alembic to ignore certain tables. For this you can use the include_object configuration option in the env.py module stored in the migrations directory. Basically, you are going to write a function that Alembic will call every time it comes upon a new entity while generating a migration script. The function will return False when the object in question is one of these PSA tables, and True for everything else.
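A minimal sketch of what that hook could look like in env.py (the table names below are the usual python-social-auth ones, but they are an assumption; check your database for the actual names PSA created):

# Tables created by python-social-auth; adjust to match your database.
PSA_TABLES = {
    'social_auth_association',
    'social_auth_code',
    'social_auth_nonce',
    'social_auth_usersocialauth',
}

def include_object(object, name, type_, reflected, compare_to):
    # Returning False makes Alembic pretend the table does not exist,
    # so autogenerate neither drops nor alters it.
    if type_ == 'table' and name in PSA_TABLES:
        return False
    return True

# ...then wire it in when configuring the migration context:
# context.configure(..., include_object=include_object)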
Update: Another option, which you included in the response you wrote, is to merge the two metadata objects into one; then the models from your application and from PSA are inspected by Alembic together.
I have nothing against the technique of merging multiple metadata objects into one, but I think it is not a good idea for an application to track migrations in models that aren't yours. Many times Alembic will not be able to capture a migration accurately, so you may need to make minor corrections on the generated script before you apply it. For models that are yours, you are capable of detecting these inaccuracies that sometimes show up in migration scripts, but when the models aren't yours I think you can miss stuff, because you will not be familiar enough with the changes that went into those models to do a good review of the Alembic generated script.
For this reason, I think it is a better idea to use my proposed include_object configuration to leave the third party models out of your migrations. Those models should be migrated according to the third party project's instructions instead.
I use two sets of models, as follows.
One uses db:
db = SQLAlchemy()
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:' + POSTGRES_PASSWORD + '@localhost/Flask'
db.init_app(app)

class User(db.Model):
    pass

The other uses Base:
Base = declarative_base()
uri = 'postgresql://postgres:' + POSTGRES_PASSWORD + '@localhost/Flask'
engine = create_engine(uri)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)
session = Session()

class Address(Base):
    pass

Since User was created with db.Model, you can use Flask-Migrate on User, while Address uses Base, which handles fetching the pre-existing table from the database.

Jenkins/Hudson job parameters at runtime?

PROBLEM
Let's say I have a Jenkins/Hudson job (for example, free-style) that takes two parameters, PARAM_ONE and PARAM_TWO. Now, I do not know the values of those parameters, but I can run some script (Perl/shell) to find them, and then I want the user to select a value from a dropdown list, after which I can start the build.
Is there any way of doing that?
Sounds like you've found a plugin that does what you need; it is pretty similar to the built-in parameterized-builds functionality.
To answer your second question: when you define parameterized builds, the parameters are typically passed to your job as environment variables, so you access them however you access environment variables in your language. For instance, if you defined a parameter PARAM_ONE, you'd access it as:
In bash:
$PARAM_ONE
In Windows batch:
%PARAM_ONE%
In Python:
import os
os.getenv('PARAM_ONE')
etc.
I imagine this would be the same for the Extended Choice Parameter plugin you are using.
Just install the Extended Choice Parameter plugin mentioned above, and pass the parameters in the build script like:
Windows
"your build script" %PARAMONE% %PARAMTWO%
In Java, you can access these parameters off the run object:
EnvVars envVars = run.getEnvironment(listener);
for (String name : envVars.keySet()) {
    listener.getLogger().println(name + " = " + envVars.get(name));
}

Change Entity Framework database schema map after using code first

I've finished building my blog using EF and Code First.
EF was running against my local SQL Express instance, with the [dbo] schema.
Now I want to publish the blog, and I have done the following:
Generated the scripts for the tables and all objects from SQL Express, and changed [dbo] to my [administrator] schema on my server.
Ran the scripts against the server. No issues; all objects were created and populated just fine.
Modified web.config and added my BlogContext connection string to point to the server, not the local SQL Express.
Published the site.
The error I am getting is: Invalid object name 'dbo.Articles'. - where Articles is one of my entities. It resides on my SQL Server as [Administrator].Articles.
As far as I can tell, EF still thinks I'm using the dbo schema, although I have added the connection string pointing to the administrator user.
How can i change the schema that EF thinks it should use?
EF will use the dbo schema if you didn't configure the schema explicitly through data annotations or the fluent API.
[Table("MyTable", Schema = "MySchema")]
public class MyEntity
{
}
Or
modelBuilder.Entity<MyEntity>().ToTable("MyTable", "MySchema");
Just for searchers: I am working with EF5 on .NET 4.5, and the two-parameter form
[Table("MyTable", "MySchema")]
does not work. Even though VS2012 shows an overload that takes two parameters, the build fails with the error: 'System.ComponentModel.DataAnnotations.Schema.TableAttribute' does not contain a constructor that takes 2 arguments. Use the Schema named property as shown above; the fluent-API mapping works just fine.