To monitor MySQL table changes in Django, I have written the following code:
while not find_close_signal():
    time.sleep(10)
    if MyProject.models.MyModel.objects.all().exists():
        some_execution()
However, it doesn't work. If there are no entries in the table at the beginning, then some_execution() never runs, even after records are later inserted into that table through other, out-of-band means.
Has anyone run into this kind of issue?
I also found that the same problem occurs in "manage.py shell": any entries added to the database outside of that shell cannot be seen from within it. Is this expected, or have I made a mistake? Thanks.
I'm not sure this is a bug; it may be that your code is running inside a transaction, in which case it cannot see changes committed by other connections.
One more thing: you can drop ".all()" from this check; MyModel.objects.exists() is enough.
Take a look at my answers here:
Why doesn't this loop display an updated object count every five seconds?
How do I deal with this race condition in django?
You can mark the database as 'dirty' so that Django discards the cached results:
from django.db import transaction
transaction.set_dirty()
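For reference, here is a minimal sketch of the polling loop (not from the original answer), assuming an older, pre-1.6 Django where transaction.set_dirty() is available; as an alternative, closing the connection between polls forces the next query to start a fresh transaction that can see rows committed by other clients. find_close_signal() and some_execution() are the placeholders from the question.

import time

from django.db import connection
from MyProject.models import MyModel

while not find_close_signal():      # placeholder from the question
    time.sleep(10)
    # Under MySQL's default REPEATABLE READ isolation, a long-lived
    # transaction keeps seeing its original snapshot. Closing the
    # connection makes the next query open a new one (and a new
    # transaction), so rows committed elsewhere become visible.
    connection.close()
    if MyModel.objects.exists():    # .all() is not needed here
        some_execution()            # placeholder from the question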
Every time I try to execute a new action I get an error. I guess it's because some of the previous actions can't be executed twice, but in the tutorial I'm following, the presenter can do it without problems. How can I fix this?
Thanks in advance!
If the error you are referring to is 'Table...already exists', it sounds like you may want to start your script with:
drop database if exists ContactMe;
That will remove all tables and data in that database, so the script's CREATE statements can run again from scratch.
By the way, "action" is a concept invented by the UI you are using, not something that has any specific meaning in MySQL.
This may sound like an opinion question, but it's actually a technical one: Is there a standard process for maintaining a simple data set?
What I mean is this: let's say all I have is a list of something (we'll say books). The primary storage engine is MySQL. I see that Solr has a data import handler. I understand that I can use this to pull in book records on a first run - is it possible to use this for continuous migration? If so, would it work as well for updating books that have already been pulled into Solr as it would for pulling in new book records?
Otherwise, if the data import handler isn't the standard way to do it, what other ways are there? Thoughts?
Thank you very much for the help!
If you want to update documents from within Solr, I believe you'll need to use the UpdateRequestHandler as opposed to the DataImportHandler. I've never had the need to do this where I work, so I don't know all that much about it. You may find this link of interest: Uploading Data With Index Handlers.
If you want to update Solr with records that have been newly added to your MySQL database, you would use the DataImportHandler's delta-import. Basically, it works like this: you have some kind of field in MySQL that indicates a record is new or changed, and if the record is new, Solr imports it. For example, where I work, we have an "updated" timestamp field that Solr uses to determine whether it should import that record. Here's a good link to visit: DataImportHandler.
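As a hedged illustration (not taken from the answer): once a deltaQuery is configured in the DIH data-config, the delta-import can be kicked off periodically over HTTP. The host, port, and core name below are assumptions.

import requests

# Assumed Solr location and core name; adjust for your setup.
SOLR_DIH_URL = "http://localhost:8983/solr/books/dataimport"

# Ask the DataImportHandler for a delta-import: only rows the deltaQuery
# flags as new or updated since the last index time are re-imported.
response = requests.get(
    SOLR_DIH_URL,
    params={"command": "delta-import", "clean": "false", "commit": "true"},
)
response.raise_for_status()
print(response.text)  # DIH responds with a status report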
This question looks similar to something we do, but not with SQL; it's with HBase (the Hadoop-stack database). There we have the HBase Indexer, which, after mapping the database to Solr, listens for events on new rows in HBase and then runs code to fetch those values and add them to Solr. I'm not sure whether an equivalent exists for SQL, but the concept looks similar: SQL has triggers that can listen for inserts and updates, and on those events you can kick off the steps that push the changes to Solr continuously.
Today I tried to set up a job that checks for redundancy (duplicate rows) in a particular table.
I have one table, EmpDetails.
Please see the screenshot for the records in the table.
A job runs in SQL Server every 2 minutes and deletes the duplicates from the table.
Result of the job:
But my expectations for the job are a bit higher: I want it to check for redundancy across the whole database, not just a single table.
Can anyone please tell me whether that is really possible? If it is, what would be the right approach? Thanks in advance.
You should first define what a duplicate is. To run this across multiple databases, you can either loop through the databases yourself or use EXEC sp_MSforeachdb, which is an undocumented stored procedure.
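As a rough, hedged sketch (not from the answer): sp_MSforeachdb runs a T-SQL snippet once per database, substituting each database name for the ? placeholder, so the per-database dedupe statement can be wrapped in it. The connection string and the inner T-SQL below are assumptions, and the PRINT is only a stand-in, since the actual DELETE depends entirely on how you define a duplicate.

import pyodbc

# Assumed connection string; adjust the driver, server, and authentication.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,
)
cursor = conn.cursor()

# sp_MSforeachdb replaces ? with each database name in turn. Swap the PRINT
# for your own duplicate-removal statement once "duplicate" is defined.
cursor.execute(
    "EXEC sp_MSforeachdb N'"
    "USE [?]; "
    "PRINT DB_NAME() + '' checked for duplicates'';"
    "'"
)
conn.close()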
Thanks
I have a bit of a problem. When I set up an SSIS package and run it, it shows me the number of rows going into the SQL table, but when I query the table, almost 40,000 rows are missing compared with the count shown after the conditional split in the package.
What causes this problem? Even when I target a normal table or a view, it does the same thing. I have to use the fast-load option here because a lot of source files are being loaded. This is only a test before sending it to production, and I am stuck at the moment. Is there a way to work around this problem and get all the data that is supposed to be pumped into the table? Please also note that the conditional split removes any NULL values, as shown in the first picture.
Check the Error Output page (below Connection Manager and Mappings) in the destination component. If the error setting is set to Ignore Failure or Redirect Row, the component will report success, but only the rows that did not fail will be inserted.
What is the data source? Try checking your data and make sure you don't have any terminators embedded in one of the rows.
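As a small, hedged helper (not part of the answer), you could pre-scan a delimited source file and flag rows whose field count does not match the header; a mismatch usually points at an embedded delimiter or line terminator. The file path and delimiter below are assumptions.

# Assumed path and delimiter; adjust to match the flat file connection manager.
SOURCE_FILE = "C:/data/source_file.txt"
DELIMITER = "|"

with open(SOURCE_FILE, "r", encoding="utf-8", errors="replace") as f:
    header = f.readline()
    expected = header.count(DELIMITER) + 1
    for line_number, line in enumerate(f, start=2):
        found = line.rstrip("\r\n").count(DELIMITER) + 1
        if found != expected:
            # Likely an embedded terminator or delimiter inside a value.
            print(f"Line {line_number}: expected {expected} fields, found {found}")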
I use Kettle for some transformations and ran into a problem:
For one specific row, my DatabaseLookup step hangs. It just doesn't give a result. Trying to stop the transformation results in a never ending "Halting" for the lookup step.
The value being looked up is nothing complicated at all, nor is it any different from the other rows/values. It just won't continue.
Running the same query directly against the database, or in a different database tool (e.g. SQuirreL), works fine.
I use Kettle/Spoon 4.1, the database is MySQL 5.5.10. It happens with Connector/J 5.1.14 and the one bundled with spoon.
The step initializes flawlessly (it even works for other rows) and I have no idea why it fails. No error message in the Spoon logs, nothing on the console/shell.
Weird. What's the table type? Is it MyISAM? Does your transformation also perform updates to the same table? Maybe you are inadvertently locking the table at the same time somehow?
Or maybe it's a MySQL 5.5 thing. But I've used this step extensively with MySQL 5.0 and every PDI 4.x release, and it's always been fine. Maybe post the transformation?
I just found the culprit:
The lookup takes the id field as its result and gives it a new name, PERSON_ID. This FAILS in some cases! The resulting lookup/prepared statement was something like:
select id as PERSON_ID FROM table WHERE ...
SOLUTION:
Don't use an underscore in the "New name" for the field! With a new name of PERSONID, everything works flawlessly for ALL rows!
Stupid error ...