I have a little problem when I want to save an entity in another thread, e.g.
Executors.newCachedThreadPool().submit(() -> userService.save(user))
and in userService I have a lot of logic; at the end I want to save the user into the database using userRepository.save(user), but the repo doesn't save it. What am I doing wrong? If someone wants more info, let me know.
I'm trying to implement python-social-auth in Flask. I've ironed out tons of kinks whilst trying to interpret about 4 tutorials and a full Flask book at the same time, and feel I've reached sort of an impasse with Flask-Migrate.
I'm currently using the following code to create the tables necessary for python-social-auth to function in a flask-sqlalchemy environment.
from social.apps.flask_app.default import models
models.PSABase.metadata.create_all(db.engine)
Now, they're obviously using some form of their own Base, not related to my actual db object. This in turn causes Flask-Migrate to completely miss these tables and drop them in migrations. Now, obviously I can remove these drops from every migration, but I can imagine it being one of those things that at some point is going to get forgotten about, and all of a sudden I have no OAuth ties anymore.
I've gotten this solution to work by using (and modifying) the manage.py command syncdb, as suggested by the python-social-auth Flask example.
Miguel Grinberg, the author of Flask-Migrate, replies here to an issue that seems to very closely resemble mine.
The closest I could find on Stack Overflow was this, but it doesn't shed too much light on the entire thing for me, and the answer was never accepted (and I can't get it to work; I have tried a few times).
For reference, here is my manage.py:
#!/usr/bin/env python
from flask.ext.script import Server, Manager, Shell
from flask.ext.migrate import Migrate, MigrateCommand

from app import app, db

manager = Manager(app)
manager.add_command('runserver', Server())
manager.add_command('shell', Shell(make_context=lambda: {
    'app': app,
    'db_session': db.session
}))

migrate = Migrate(app, db)
manager.add_command('db', MigrateCommand)


@manager.command
def syncdb():
    # create the python-social-auth tables as well as the app's own tables
    from social.apps.flask_app.default import models
    models.PSABase.metadata.create_all(db.engine)
    db.create_all()


if __name__ == '__main__':
    manager.run()
And to clarify, the db init / migrate / upgrade commands only create my user table (and the migration one obviously), but not the social auth ones, while the syncdb command works for the python-social-auth tables.
I understand from the GitHub response that this isn't supported by Flask-Migrate, but I'm wondering if there's a way to fiddle the PSABase tables in so that they are picked up by the db object passed into Migrate.
Any suggestions welcome.
(Also, first-time poster. I feel I've done a lot of research and tried quite a few solutions before I finally came here to post. If I've missed something obvious in the guidelines of SO, don't hesitate to point that out to me in a private message and I'll happily oblige)
After the helpful answer from Miguel here I got some new keywords to research. I ended up at a helpful GitHub page which had further references to, amongst others, the Alembic Bitbucket site, which helped immensely.
In the end I did this to my Alembic migration env.py file:
from sqlalchemy import engine_from_config, pool, MetaData
[...]
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from flask import current_app

config.set_main_option('sqlalchemy.url',
                       current_app.config.get('SQLALCHEMY_DATABASE_URI'))


def combine_metadata(*args):
    m = MetaData()
    for metadata in args:
        for t in metadata.tables.values():
            t.tometadata(m)
    return m


from social.apps.flask_app.default import models

target_metadata = combine_metadata(
    current_app.extensions['migrate'].db.metadata,
    models.PSABase.metadata)
This seems to work absolutely perfectly.
The problem is that you have two sets of models, each with a different SQLAlchemy metadata object. The models from PSA were generated directly from SQLAlchemy, while your own models were generated through Flask-SQLAlchemy.
Flask-Migrate only sees the models that are defined via Flask-SQLAlchemy, because the db object that you give it only knows about the metadata for those models; it knows nothing about the PSA models that bypassed Flask-SQLAlchemy.
So the end result is that each time you generate a migration, Flask-Migrate/Alembic finds these PSA tables in the db and decides to delete them, because it does not see any models for them.
I think the best solution for your problem is to configure Alembic to ignore certain tables. For this you can use the include_object configuration in the env.py module stored in the migrations directory. Basically, you are going to write a function that Alembic will call every time it comes upon a new entity while generating a migration script. The function will return False when the object in question is one of these PSA tables, and True for everything else.
Update: Another option, which you included in the response you wrote, is to merge the two metadata objects into one, so that the models from your application and from PSA are inspected by Alembic together.
I have nothing against the technique of merging multiple metadata objects into one, but I think it is not a good idea for an application to track migrations in models that aren't yours. Many times Alembic will not be able to capture a migration accurately, so you may need to make minor corrections on the generated script before you apply it. For models that are yours, you are capable of detecting these inaccuracies that sometimes show up in migration scripts, but when the models aren't yours I think you can miss stuff, because you will not be familiar enough with the changes that went into those models to do a good review of the Alembic generated script.
For this reason, I think it is a better idea to use my proposed include_object configuration to leave the third party models out of your migrations. Those models should be migrated according to the third party project's instructions instead.
I use two sets of models, as follows.
One set uses db:
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:' + POSTGRES_PASSWORD + '@localhost/Flask'
db.init_app(app)


class User(db.Model):
    pass
The other uses a plain SQLAlchemy declarative Base:
from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()
uri = 'postgresql://postgres:' + POSTGRES_PASSWORD + '@localhost/Flask'
engine = create_engine(uri)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)
session = Session()


class Address(Base):
    pass
Since User is created with db.Model, you can use Flask-Migrate on User, while Address uses Base, which handles fetching the pre-existing table from the database.
So my scenario drilled down to the essence is as follows:
Essentially, I have a config file containing a set of SQL queries whose result sets need to be exported as CSV files.
Since some queries may return billions of rows, and because something may interrupt the process (bug, crash, ...), I want to use a framework such as Spring Batch, which gives me restartability and job monitoring.
I am using a file based H2 database for persisting spring batch jobs.
So, here are my questions:
Upon creating a Job, I need to provide my RowMapper with some initial configuration. So what happens when a job needs to be restarted after, e.g., a crash? Concretely:
Is the state of the RowMapper automatically persisted, and upon restart will Spring Batch try to restore the object from its database, or
will the RowMapper object that is part of the original Spring Batch XML config file be used, or
do I have to maintain the RowMapper's state myself using the step's/job's ExecutionContext?
The above question is related to whether there is magic going on when using the Spring Batch XML configuration, or whether I could just as well create all these beans programmatically:
Since I need to parse my own config format into a Spring Batch job config, I'd rather just use Spring Batch's Java classes (beans) and fill them out appropriately than attempt to manually write out valid XML. However, if my Job crashes, I would have to create all the beans myself again. Does Spring Batch automagically restore the Job state from its database?
If I really need XML, is there a way to serialize a Spring Batch JobRepository (or one of these objects) as a Spring Batch XML config?
Right now, I tried to configure my Step with the following code - but I am unsure if this is the proper way to do this:
Is TaskletStep the way to go?
Is the way I create the chunked reader/writer correct, or is there some other object which I should use instead?
I would have assumed that opening the reader and writer would occur automatically as part of the JobExecution, but if I don't open these resources prior to running the Job, I get an exception telling me that I need to open them first. Maybe I need to create some other object that manages the resources (JDBC connection and file handle)? (See the sketch after the code below.)
JdbcCursorItemReader<Foobar> itemReader = new JdbcCursorItemReader<Foobar>();
itemReader.setSql(sqlStr);
itemReader.setDataSource(dataSource);
itemReader.setRowMapper(rowMapper);
itemReader.afterPropertiesSet();
ExecutionContext executionContext = new ExecutionContext();
itemReader.open(executionContext);
FlatFileItemWriter<String> itemWriter = new FlatFileItemWriter<String>();
itemWriter.setLineAggregator(new PassThroughLineAggregator<String>());
itemWriter.setResource(outResource);
itemWriter.afterPropertiesSet();
itemWriter.open(executionContext);
int commitInterval = 50000;
CompletionPolicy completionPolicy = new SimpleCompletionPolicy(commitInterval);
RepeatTemplate repeatTemplate = new RepeatTemplate();
repeatTemplate.setCompletionPolicy(completionPolicy);
RepeatOperations repeatOperations = repeatTemplate;
ChunkProvider<Foobar> chunkProvider = new SimpleChunkProvider<Foobar>(itemReader, repeatOperations);
ItemProcessor<Foobar, String> itemProcessor = new ItemProcessor<Foobar, String>() {
    /* Custom implementation */ };
ChunkProcessor<Foobar> chunkProcessor = new SimpleChunkProcessor<Foobar, String>(itemProcessor, itemWriter);
Tasklet tasklet = new ChunkOrientedTasklet<Foobar>(chunkProvider, chunkProcessor); //new SplitFilesTasklet();
TaskletStep taskletStep = new TaskletStep();
taskletStep.setName(taskletName);
taskletStep.setJobRepository(jobRepository);
taskletStep.setTransactionManager(transactionManager);
taskletStep.setTasklet(tasklet);
taskletStep.afterPropertiesSet();
job.addStep(taskletStep);
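Regarding the manual open() calls: one commonly used alternative (a sketch under the assumption that it fits the setup above, not a verified fix) is to hand the reader and writer to the step as streams, so the step itself opens, updates, and closes them against its own ExecutionContext, which is also the state Spring Batch persists for restarts. Reusing the variables from the snippet above:

// Variation of the step wiring above; requires org.springframework.batch.item.ItemStream.
// In this variant, do NOT call itemReader.open(...) / itemWriter.open(...) yourself.
TaskletStep taskletStep = new TaskletStep();
taskletStep.setName(taskletName);
taskletStep.setJobRepository(jobRepository);
taskletStep.setTransactionManager(transactionManager);
taskletStep.setTasklet(tasklet);
// Register the reader and writer as ItemStreams so the step manages the
// JDBC cursor and the file handle (open/update/close) for you.
taskletStep.setStreams(new ItemStream[] { itemReader, itemWriter });
taskletStep.afterPropertiesSet();
job.addStep(taskletStep);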
Most of your questions are really complex, and it is difficult to give a good answer without writing a long paper.
I'm new to Spring Batch like you, and I found a lot of really useful info - and all the answers to your questions - reading Spring Batch in Action: it's complete, well explained, full of examples, and covers all aspects of the framework (reader/writer/processor, job/tasklet/chunk lifecycle and persistence, transaction/resource management, job flow, integration with other services, partitioning, restarting/retry, failure management, and a lot of other interesting things).
Hope this helps.
I just found a strange problem in Hibernate.
It's in my Java EE web project, which uses the Hibernate framework and a JSON plugin. My code looks like this:
private User user;
// getters and setters ...

public String getUser() {
    if (findUser(...) != null) {
        user = findUser(...);
        user.setPassword(""); // important: so the password is not sent to the front end
        return "success";
    } else {
        return "error";
    }
}
The problem is that when this code executes, the User's password in the database gets cleared. I'm sure no update or insert function is being triggered.
I want to know why. Can anyone figure it out? Thanks!
That's the base principle of an ORM like Hibernate: you manipulate objects mapped to database tables and attached to a persistent session, and every change you make to these objects is automatically and transparently recorded in the database.
If you don't want your changes to the User object to be recorded in the database, you need to first detach the object from the Hibernate session, using session.evict(user).
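A minimal sketch of that approach, using the names from the question; how the Session is obtained, the id parameter, and findUser() are placeholders, not the actual code:

// Sketch only; assumes an injected org.hibernate.SessionFactory and that
// findUser(id) loads the User through the current session.
public String getUser(Long id) {
    Session session = sessionFactory.getCurrentSession();
    User found = findUser(id);
    if (found == null) {
        return "error";
    }
    session.evict(found);   // detach the entity from the persistence context
    found.setPassword("");  // this change is no longer tracked, so it won't be flushed
    this.user = found;      // expose it to the JSON layer without the password
    return "success";
}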
You don't seem to have grasped basic (and very important) principles of Hibernate. Read its excellent documentation.
We're running into a small problem deploying a web application to another environment.
We created the application's DB using the Entity Framework Code First approach (the DB is automatically created from the model).
In this development environment we are using integrated security, and the tables are created under the dbo user. The tables are named like
[dbo].[myTable]
For our other environment, we are using username/password authentication for the DB.
We scripted the tables and created them on the DB. So they are now named like
[myDbUser].[myTable]
When running the application, we always encounter the error
Invalid object name 'dbo.myTable'.
It seems like the code is still looking for a dbo table, which is not present, and it thus fails.
Can anyone shed some light on this problem? Where does Entity Framework get this dbo prefix from?
Thanks
Specify schema explicitly:
[Table("Users", Schema = "dbo")]
public class User { .. }
Or specify the default DB schema for your user: 'dbo'.
To specify the schema with the fluent API:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<ClassName>().ToTable("TableName", "SchemaName");
}
I ran into this issue recently as well, since we support several different schemas with the same model. What I basically came up with was passing the schema name to the classes/methods that map the model. So, for example, EntityTypeConfiguration subclasses take the schema name as a constructor argument and pass it, along with the hard-coded table name string, to ToTable().
See here for a more detailed explanation: https://stackoverflow.com/a/14782001/243607
The environment of my application: web-based, Spring MVC+Security, Hibernate, MySQL(InnoDB)
I am working on a small database application operated from a web interface. There are specific, known users that handle the stored data. Now I need to keep track of every create/update/delete action a user executes on the database and produce simple, "list-like" reports from this. As of now, I am thinking of a "log" table (columns: userId + timestamp + description, etc.). I guess an aspect could be fired upon any C(R)UD operation to insert a log row into this table, but I am not sure this is how it should be done.
I am also aware of the usual MySQL logs as well as log4j. As for the logfiles, I might need more information than what is available to MySQL. Log4j might be a solution, but I do not see how it is able to write to MySQL tables. Also, I would like to have some associations preserved in my log table (e.g. the user id) to let the db do the basic filtering etc. Directions on this one appreciated.
What would you recommend? Is there even any built-in support in Hibernate/Spring or is log4j the right way to go?
Thanks!
Log4j is modular; you can write your own backend that writes the log into a database if you wish to do so. In fact, it even comes with a JDBC appender right out of the box, although take note of the big red warning there.
For Hibernate, you can probably build something on the interceptors and events that keeps track of all modifications and logs them to a special audit table.
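For example, a rough sketch of the interceptor route using Hibernate's EmptyInterceptor; AuditDao and currentUserId() are hypothetical placeholders for however you persist the log rows (userId, timestamp, description) and resolve the logged-in user:

import java.io.Serializable;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

// Sketch only: AuditDao and currentUserId() are made up for illustration.
interface AuditDao {
    void log(Long userId, String description);
}

public class AuditInterceptor extends EmptyInterceptor {

    private final AuditDao auditDao;

    public AuditInterceptor(AuditDao auditDao) {
        this.auditDao = auditDao;
    }

    @Override
    public boolean onSave(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        auditDao.log(currentUserId(), "CREATE " + entity.getClass().getSimpleName() + " " + id);
        return false; // entity state was not modified
    }

    @Override
    public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
                                Object[] previousState, String[] propertyNames, Type[] types) {
        auditDao.log(currentUserId(), "UPDATE " + entity.getClass().getSimpleName() + " " + id);
        return false;
    }

    @Override
    public void onDelete(Object entity, Serializable id, Object[] state,
                         String[] propertyNames, Type[] types) {
        auditDao.log(currentUserId(), "DELETE " + entity.getClass().getSimpleName() + " " + id);
    }

    private Long currentUserId() {
        return null; // placeholder: e.g. read the user from Spring Security's SecurityContextHolder
    }
}

You would then register the interceptor with the SessionFactory (for instance via the entityInterceptor property of Spring's LocalSessionFactoryBean), and it is usually safer to write the audit rows with plain JDBC rather than through the same Hibernate session that is being intercepted.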
Have you looked into using a MappedSuperclass for C(R)UD operation logging?
@MappedSuperclass
public class BaseEntity {

    @Basic
    @Temporal(TemporalType.TIMESTAMP)
    public Date getLastUpdate() { ... }

    public String getLastUpdater() { ... }

    ...
}

@Entity
class Order extends BaseEntity {

    @Id
    public Integer getId() { ... }

    ...
}
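A possible complement (an assumption on my part, not something stated in the answer above): if the entities are managed as JPA entities, the audit columns can be filled in automatically via lifecycle callbacks on the mapped superclass, e.g.:

import java.util.Date;

import javax.persistence.MappedSuperclass;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@MappedSuperclass
public class BaseEntity {

    private Date lastUpdate;
    private String lastUpdater;

    @Temporal(TemporalType.TIMESTAMP)
    public Date getLastUpdate() { return lastUpdate; }
    public void setLastUpdate(Date lastUpdate) { this.lastUpdate = lastUpdate; }

    public String getLastUpdater() { return lastUpdater; }
    public void setLastUpdater(String lastUpdater) { this.lastUpdater = lastUpdater; }

    @PrePersist
    @PreUpdate
    protected void updateAuditFields() {
        // invoked by the JPA provider before every insert/update of a subclass entity
        this.lastUpdate = new Date();
        this.lastUpdater = resolveCurrentUser();
    }

    private String resolveCurrentUser() {
        return "unknown"; // hypothetical stub, e.g. read from Spring Security
    }
}

Note that @PrePersist/@PreUpdate are JPA callbacks; depending on the Hibernate version and whether you use the native Session API, they may not fire, in which case the interceptor/event approach above is the more reliable option.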
In case you go for the logging solution and are looking to do it yourself, try searching for JDBCAppender; it's not perfect but should work.
In case you want an off-the-shelf product for centralized logging, consider trying logFaces - it can write directly into your own database. (Disclosure: I am the author of this product.)