I have just started looking at Alembic. I'm coming from Django, where we use South (soon to be merged into Django itself) to migrate our database schemas, and where a friendly old fixed-width number like 0037_fix_my_schema.py makes the order in which migrations are applied obvious. So I am naturally intrigued by Alembic's revision IDs. Is there a DAG backing Alembic, or can someone give a little overview of its internals in this respect?
I took a look myself. The source says:
import uuid

def rev_id():
    # Take a random UUID, truncate it, and render it as a short hex string.
    # (Python 2: hex() of a long yields '0x...L'; the slice strips both ends.)
    val = int(uuid.uuid4()) % 100000000000000
    return hex(val)[2:-1]
Not so fascinating.
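For what it's worth, the ordering does not come from the ID itself: each migration script records a down_revision pointer to its parent, and Alembic orders migrations by walking that chain (effectively a DAG once branching enters the picture). A sketch of a generated script's header, with made-up revision IDs:

"""fix my schema

Revision ID: 1a2b3c4d5e6f
Revises: 0f9e8d7c6b5a
"""

# Alembic follows these pointers to order migrations, not the IDs' values.
revision = '1a2b3c4d5e6f'
down_revision = '0f9e8d7c6b5a'  # None marks the very first revision

def upgrade():
    pass

def downgrade():
    pass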
I have a Pylons project and a SQLAlchemy model that implements schema-qualified tables:
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Hockey(Base):
    __tablename__ = "hockey"
    __table_args__ = {'schema': 'winter'}
    hockey_id = sa.Column(sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True)
    baseball_id = sa.Column(sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id'))
This code works great with PostgreSQL, but it fails on table and foreign key names when using SQLite (due to SQLite's lack of schema support):
sqlalchemy.exc.OperationalError: (OperationalError) unknown database "winter" 'PRAGMA "winter".table_info("hockey")' ()
I'd like to continue using SQLite for dev and testing. Is there a way to have this fail gracefully on SQLite?
It's hard to know where to start with that kind of question. So . . .
Stop it. Just stop it.
There are some developers who don't have the luxury of developing on their target platform. Their life is a hard one: moving code (and sometimes compilers) from one environment to the other, debugging twice (sometimes having to debug remotely on the target platform), and gradually coming to an awareness that the gnawing in their gut is actually the start of an ulcer.
Install PostgreSQL.
When you can use the same database environment for development, testing, and deployment, you should.
Not to mention the QA team. Why on earth are they testing stuff they're not going to ship? If you're deploying on PostgreSQL, assure the quality of your work on PostgreSQL.
Seriously.
I'm not sure if this works with foreign keys, but one could try SQLAlchemy's Multi-Tenancy Schema Translation for Table objects. It worked for me, though I used custom primaryjoin and secondaryjoin expressions in combination with composite primary keys.
The schema translation map can be passed directly to the engine creator:
...
if dialect == "sqlite":
    url = lambda: "sqlite:///:memory:"
    # Map both schemas to None so SQLite sees plain, unqualified table names.
    execution_options = {"schema_translate_map": {"winter": None, "summer": None}}
else:
    # user, password, host, port, and name come from configuration elsewhere.
    url = lambda: f"postgresql://{user}:{password}@{host}:{port}/{name}"
    execution_options = {}

engine = create_engine(url(), execution_options=execution_options)
...
Here is the doc for create_engine. There is another question on SO which might be related in that regard.
But one might get colliding table names if all schema names are mapped to None.
I'm just a beginner myself, and I haven't used Pylons, but...
I notice that you are combining the table and the associated class together. How about if you separate them?
import sqlalchemy as sa

meta = sa.MetaData('sqlite:///tutorial.sqlite')
schema = None

hockey_table = sa.Table('hockey', meta,
    sa.Column('score_id', sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True),
    sa.Column('baseball_id', sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id')),
    schema=schema,
)

meta.create_all()
Then you could create a separate class:

class Hockey(object):
    ...

and map it to the table:

mapper(Hockey, hockey_table)
Then just set schema to None everywhere if you are using SQLite, and to the value(s) you want otherwise.
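For instance (a minimal sketch; detecting the backend from an existing engine is just one option, and 'winter' is the schema from the question):

# None keeps SQLite happy; PostgreSQL gets the real schema name.
schema = None if engine.dialect.name == 'sqlite' else 'winter'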
You didn't post a working example, so the example above isn't a working one either. However, as other people have pointed out, trying to maintain portability across databases is in the end a losing game. I'd add a +1 to the people suggesting you just use PostgreSQL everywhere.
HTH, Regards.
I know this is a 10+ year old question, but I ran into the same problem recently: Postgres in production and SQLite in development.

The solution was to register an event listener for the engine's "connect" event:
import sqlalchemy

@sqlalchemy.event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
    dbapi_connection.execute('ATTACH "your_data_base_name.db" AS "schema_name"')
Running the ATTACH statement only once will not work, because it affects only a single connection. This is why we need the event listener: to issue the ATTACH statement on every connection.
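Putting it together for the two schemas in the question, a minimal sketch (the engine URL and the .db file names are made up):

import sqlalchemy
from sqlalchemy import create_engine

engine = create_engine("sqlite://")

@sqlalchemy.event.listens_for(engine, "connect")
def attach_schemas(dbapi_connection, connection_record):
    # Each attached database becomes a schema SQLite can address as winter.* / summer.*
    dbapi_connection.execute('ATTACH DATABASE "winter.db" AS winter')
    dbapi_connection.execute('ATTACH DATABASE "summer.db" AS summer')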
I have an object detection system that reads an input image and runs inference, then outputs the class names of the detected objects in classIDs[] and their confidence levels in confidences[]. How can I communicate this output from the deep learning system?
In case one has never worked with ZeroMQ, one may enjoy first looking at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
Q: Could you please tell me how I can communicate the output ... from the deep learning system?
One may use socket.send( pickle.dumps( [ classIDs[i], confidences[i], ] ) )
Neither the first topic-creeping comment from the O/P (posted 15 minutes after this answer had answered the O/P's problem definition, and deleted later) nor the second topic-creeping comment (posted about an hour after a due answer was in place) changes the game: whatever you try to pass over the ZeroMQ channel has to be SER/DES-handled. If you are willing to make things complex, fine, it still goes the same way:
socket.send( pickle.dumps( <whateverBLOBneeded> ) )
If you start to face new problems due to SER/DES collisions (as object instances and Class()-es so often cause when one attempts to have 'em pickle'd), feel free to salvage the so-often-exception-"vomiting" pickle module by swapping in dill, a smarter SER/DES module from Mike McKerns: import dill as pickle
and, again, the rest goes the same way:
socket.send( pickle.dumps( <whateverBLOBneeded> ) )
A bonus part: one may rather want to prototype with PUSH/PULL, as it does not block in a mutual deadlock the way REQ/REP so easily can.
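A minimal PUSH/PULL sketch under these assumptions: the endpoint tcp://127.0.0.1:5555 is made up, and the sample classIDs / confidences values stand in for the detector's real outputs.

import pickle
import zmq

classIDs = ["person", "dog"]   # stand-ins for the detector's real outputs
confidences = [0.97, 0.82]

ctx = zmq.Context()

# Detector side: PUSH distributes each detection to a connected consumer.
push = ctx.socket(zmq.PUSH)
push.bind("tcp://127.0.0.1:5555")

# Consumer side (normally a separate process): PULL receives and unpickles.
pull = ctx.socket(zmq.PULL)
pull.connect("tcp://127.0.0.1:5555")

for class_id, confidence in zip(classIDs, confidences):
    push.send(pickle.dumps([class_id, confidence]))

for _ in classIDs:
    class_id, confidence = pickle.loads(pull.recv())
    print(class_id, confidence)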
For some reason order_by() is not working for me on a queryset. I've tried everything I can think of, but my Django/MySQL installation doesn't seem to be doing anything with the order_by() method. The list appears to just remain in a fairly unordered state, or is ordered on some basis I cannot see.
My Django installation is 1.8.
An example of one of my models is as follows:
from django.db import models

class PositiveTinyIntegerField(models.PositiveSmallIntegerField):
    def db_type(self, connection):
        if connection.settings_dict['ENGINE'] == 'django.db.backends.mysql':
            return "tinyint unsigned"
        else:
            return super(PositiveTinyIntegerField, self).db_type(connection)

class School(models.Model):
    school_type = models.CharField(max_length=40)
    order = PositiveTinyIntegerField(default=1)

    # Make the identity of db rows clear in admin
    def __str__(self):
        return self.school_type
And here is the relevant line from my view:
schools = School.objects.order_by('order')
At first I thought the problem was related to having used the non-standard PositiveTinyIntegerField(), defined by a class I found on a website somewhere, which allows me to use the MySQL tiny integer field. However, when I ordered by 'id' or 'school_type', the list still remained in an order that appeared fairly random to my eye.
I could put in my own loop which orders the queryset after it has been retrieved, but I'd really rather solve this issue so I can use the standard Django way of doing it.
I hope someone can see where the issue may be coming from.
I managed to resolve it with some help from the comments here. I tried writing each school object to stdout using sys.stdout.write(str(school)). The logs then showed me that the data was in fact being ordered correctly, so the problem had to be in how the data was packaged before being rendered by the template.

I wrote the view some time ago, before I decided I wanted it ordered, so it turned out the problem was caused by each school object (with an attached tree of related data) being read into a dictionary. Once I changed the data type to a list, the schools rendered in my intended order.
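A minimal sketch of that fix (build_related_tree and the variable names are stand-ins, not the original view code; note that plain dicts do not preserve insertion order on the Python versions contemporary with Django 1.8):

schools = School.objects.order_by('order')

# Before: a dict keyed by school discards the queryset's ordering.
school_data = {school: build_related_tree(school) for school in schools}

# After: a list of (school, related_tree) pairs preserves it.
school_data = [(school, build_related_tree(school)) for school in schools]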
The graphite-webapp does not encourage ad-hoc graphing. Graphiti et al. are just fancy UIs that, while improving the UI/UX, do not do much about the inherent linear metric search that plagues the graphite-webapp. Correct me if I'm wrong here, but the only option I have come across that encourages ad-hoc graphing is Graph-Explorer. So I am assuming that Graph-Explorer is the only way ahead.
I have some 1000 distinct metrics currently, named in the following fashion:
stats.beta.pluto.ip-10-0-1-81.helios.pa.v4.reminder.total
stats.beta.pluto.ip-10-0-1-81.helios.pa.v4.reminder.failed
stats.beta.pluto.ip-10-0-1-81.helios.pa.v4.reminder.delivered
stats.dev.ganglia.ip-10-0-3-40.ink.web.pi.notification.android.total
stats.dev.ganglia.ip-10-0-3-40.ink.web.pi.notification.android.failed
stats.dev.ganglia.ip-10-0-3-40.ink.web.pi.notification.android.delivered
I understand that these will become:

metric=stats.env=dev.role=ganglia.server=ip-10-0-3-40.application=ink.endpoint=web.src=pi.metric=notification.what=total
1. Where do I insert unit and target_type tags? Similarly, I have 500 timers.
2. How do I go about migrating from 'proto1' to 'proto2'? Also, where exactly does Carbon-Tagger come into the stack?
3. Do I rename my metrics at the source level?
4. Do I modify the structured_metrics/plugins/statsd.py file, as we have a fixed hierarchy across our distributed infrastructure?
5. Anything I am missing? What will I have to change in my statsd? I quote the carbon-tagger documentation: "aggregators like statsd will need proto2 support."
The structured_metrics plugins will set the tags for proto1 ("old style") metrics; see https://github.com/vimeo/graph-explorer/wiki/Structured-Metrics

If you want to stick with proto1, you just have to create a plugin to tag your metrics; see https://github.com/vimeo/graph-explorer/wiki/Structured-Metrics#writing-your-own-plugins and the existing plugins for examples.

You can basically ignore carbon-tagger if you want to stick with proto1, so 3 is not needed; but otherwise, yes. The statsd plugin just converts statsd's internal metrics to proto2.
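For a feel of what such tagging amounts to, here is a rough, free-standing illustration of mapping one of the metric names above to tags. This is not the actual Graph-Explorer plugin API, and the unit / target_type values are assumptions:

import re

PATTERN = re.compile(
    r'^stats\.(?P<env>[^.]+)\.(?P<role>[^.]+)\.(?P<server>[^.]+)\.'
    r'(?P<application>[^.]+)\.(?P<endpoint>[^.]+)\.(?P<src>[^.]+)\.'
    r'(?P<metric>[^.]+)\.(?P<what>[^.]+)$'
)

def to_tags(name):
    # Split a dotted proto1 name into the tag dictionary proto2 expects.
    m = PATTERN.match(name)
    if m is None:
        return None
    tags = m.groupdict()
    tags.update(unit='Msg', target_type='count')  # assumed values
    return tags

print(to_tags('stats.beta.pluto.ip-10-0-1-81.helios.pa.v4.reminder.total'))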
I am doing a Rails 3 app that replaces a paper form for a company. The paper form spans two pages and contains a LOT of fields, checkboxes, drop-downs, etc.
I am wondering how to model that in the DB. One approach is to just create a field in the DB for every field on the form (normalized, of course). That will make it somewhat difficult to add or remove fields, since a migration will be needed. Another approach is to do some kind of key/value store (no, MongoDB/CouchDB is not an option; MySQL is required). Doing key/value will be very flexible but will be a pain to query, and it works directly against the grain of ActiveRecord.
Anyone have a great solution for this?
Regards,
Jacob
I would recommend that you model the most common attributes as separate database fields. Once you have set up as many fields as possible, fall back to using a key/value setup for your pseudo-random attributes. I'd recommend a simple approach of storing a Hash through the ActiveRecord method serialize. For example:
class TPS < ActiveRecord::Base
  serialize :custom, Hash
end

@tps = TPS.create(:name => "Kevin", :ssn => "123-456-789", :custom => { :abc => 'ABC', :def => 'DEF' })
@tps.name          # Kevin
@tps.ssn           # 123-456-789
@tps.custom[:abc]  # ABC
@tps.custom[:def]  # DEF
If your form is fairly static, go ahead and make a model for it; that's a reasonable approach even if it seems rather rudimentary. It's not your fault the form is so complicated; you're just coming up with a solution that takes that into account. Migrations to make adjustments to this are really simple to implement and easy to understand.
Splitting it up into a key/value version would be better, but would take a lot more engineering. If you expect that this form will be subject to frequent and radical revisions, it may make more sense to build for the future in this regard. You can see an example of the sort of form builder you might want to construct at something like WuFoo, but of course building form builders is not to be taken lightly.