I am aware of syncdb and makemigrations, but we are restricted from doing that in the production environment.
We recently had a couple of tables created on production. As expected, the tables were not visible in the admin for any user.
After that, we had the two queries below executed manually against the production SQL database (I ran the migration on my local machine and used a SHOW CREATE TABLE query to fetch the raw SQL).
django_content_type
INSERT INTO django_content_type(name, app_label, model)
values ('linked_urls',"urls", 'linked_urls');
auth_permission
INSERT INTO auth_permission (name, content_type_id, codename)
values
('Can add linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'add_linked_urls'),
('Can change linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'change_linked_urls'),
('Can delete linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'delete_linked_urls');
Now this model is visible to the super-user, who is able to grant access to staff users as well, but staff users can't see it.
Is there any other table entry that needs to be added?
Or is there any other way to solve this problem without syncdb or migrations?
We recently had a couple of tables created on production.
I can read what you wrote there in two ways.
First way: you created tables with SQL statements, for which there are no corresponding models in Django. If this is the case, no amount of fiddling with content types and permissions will make Django suddenly use the tables. You need to create models for the tables. Maybe they'll be unmanaged, but they need to exist.
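For illustration, a minimal unmanaged model for that table could look roughly like this (the single field is a guess, not taken from your actual table):
from django.db import models

class LinkedUrls(models.Model):
    # Hypothetical placeholder field; mirror the real columns of the
    # hand-created table here.
    url = models.CharField(max_length=255)

    class Meta:
        managed = False             # Django will never create, alter or drop this table
        db_table = 'linked_urls'    # point at the existing table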
Second way: the corresponding models do exist in Django, you just created the tables for them manually, so that's not a problem. What I'd do in this case is run the following code; an explanation follows after it:
from django.contrib.contenttypes.management import update_contenttypes
from django.apps import apps as configured_apps
from django.contrib.auth.management import create_permissions
for app in configured_apps.get_app_configs():
    update_contenttypes(app, interactive=True, verbosity=0)

for app in configured_apps.get_app_configs():
    create_permissions(app, verbosity=0)
What the code above does is essentially perform the work that Django performs after it runs migrations. When a migration runs, Django just creates tables as needed; when it is done, it calls update_contenttypes, which scans the models defined in the project and adds to the django_content_type table whatever needs to be added. It then calls create_permissions to update auth_permission with the add/change/delete permissions that need adding. I've used the code above to force permissions to be created early during a migration. It is useful if I have a data migration, for instance, that creates groups that need to refer to the new permissions.
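For instance, a data migration that needs the permissions right away might look roughly like this; the app label urls, the group name and the 0001_initial dependency are assumptions for the sake of the sketch, not something prescribed by Django:
from django.db import migrations


def create_group(apps, schema_editor):
    from django.apps import apps as configured_apps
    from django.contrib.auth.management import create_permissions

    # Force the add/change/delete permissions to exist now, instead of only
    # after the whole migrate run has finished.
    for app_config in configured_apps.get_app_configs():
        create_permissions(app_config, verbosity=0)

    Group = apps.get_model('auth', 'Group')
    Permission = apps.get_model('auth', 'Permission')
    editors, _ = Group.objects.get_or_create(name='editors')   # hypothetical group
    editors.permissions.add(Permission.objects.get(codename='add_linked_urls'))


class Migration(migrations.Migration):
    dependencies = [('urls', '0001_initial')]
    operations = [migrations.RunPython(create_group)]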
So, finally I had a solution. I did a lot of debugging on Django, and apparently the function below (in django.contrib.auth.backends) does the job of providing permissions:
def _get_permissions(self, user_obj, obj, from_name):
    """
    Returns the permissions of `user_obj` from `from_name`. `from_name` can
    be either "group" or "user" to return permissions from
    `_get_group_permissions` or `_get_user_permissions` respectively.
    """
    if not user_obj.is_active or user_obj.is_anonymous() or obj is not None:
        return set()

    perm_cache_name = '_%s_perm_cache' % from_name
    if not hasattr(user_obj, perm_cache_name):
        if user_obj.is_superuser:
            perms = Permission.objects.all()
        else:
            perms = getattr(self, '_get_%s_permissions' % from_name)(user_obj)
        perms = perms.values_list('content_type__app_label', 'codename').order_by()
        setattr(user_obj, perm_cache_name, set("%s.%s" % (ct, name) for ct, name in perms))
    return getattr(user_obj, perm_cache_name)
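While digging into this, a quick way to see what that permission cache ends up containing for a given staff user is the standard permission API in the shell (the username here is a hypothetical placeholder):
from django.contrib.auth.models import User

user = User.objects.get(username='some_staff_user')   # hypothetical user
print(user.get_all_permissions())                      # set of "app_label.codename" strings
print(user.has_perm('urls.add_linked_urls'))            # returned False for staff users while the issue was present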
So what was the issue?
The issue lay in this query:
INSERT INTO django_content_type(name, app_label, model)
values ('linked_urls',"urls", 'linked_urls');
It looks fine at first glance, but the query that was actually executed was:
-- notice the mixed case here - it looked so trivial that I didn't even bother to look into it until I realised what was happening internally
INSERT INTO django_content_type(name, app_label, model)
values ('Linked_Urls',"urls", 'Linked_Urls');
So Django, internally, when running migrate, ensures everything is stored in lower case - and this was the problem!
I had a separate query executed to lower-case all the previous inserts, and voila!
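If anyone prefers to do that clean-up through the ORM rather than with raw SQL, a minimal sketch would be something like this, run from ./manage.py shell (the app_label urls is taken from the inserts above; adjust to your project):
from django.contrib.contenttypes.models import ContentType

# Lower-case the model value of any content types that were inserted by hand
# with mixed case, so they match what Django itself would have created.
for ct in ContentType.objects.filter(app_label='urls'):
    ContentType.objects.filter(pk=ct.pk).update(model=ct.model.lower())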
Related
I have a problem getting some records from the database of a biometric device (a Suprema BioStar). I have the following code to connect from a Python program:
cursor.execute("SELECT t1.USRID, SRVDT, DEVUID, EVTLGUID, EVT, NM, t1.USRGRUID, USRUID, date(SRVDT), WEEK(SRVDT, 0) AS SEMANA, TIME(SRVDT), DRUID , date(STTDT) FROM biostar2_ac.t_lg"+str(aniomes2)+" t1 INNER JOIN biostar2_ac.t_usr t2 ON t1.USRID = t2.USRID WHERE ( date(SRVDT) BETWEEN "+str(aniomes2)+firstDay+" AND "+str(aniomes2)+fecha2+" ) ORDER BY USRGRUID, USRUID, SRVDT, EVTLGUID")
The problem is that a record has some missing data when I try to get it from my program, but if I search for the record in the BioStar web app, the record is complete.
I didn't create the database; it seems to come by default with the biometric system. I don't know if there is a hidden database and whether the database I'm querying is a copy of the main one. I looked for the record in the database I'm using in my program, but it doesn't appear, so there's nothing wrong in the code; I suspect there is another database or instance that the web app is consulting.
I am trying to write a Spring Batch Starter job that reads a CSV file and inserts the records into a MySQL DB. When it begins I want to save the start time in a tracking table, and when it ends, the end time in that same table. The table structure is like:
TRACKING : id, start_time, end_time
DATA: id, product, version, server, fk_trk_id
I am unable to find an example project that does such a thing. I believe this needs to be a Spring Batch Starter project that can handle multiple queries, i.e.:
// insert start time
1. INSERT INTO tracking (start_time) VALUES (NOW(6));
// get last inserted id for foreign key
2. SET @last_id_in_tracking = LAST_INSERT_ID();
// read from CSV and insert data into 'data' DB table
3. INSERT INTO data (product, version, server, fk_trk_id) VALUES (mysql, 5.1.42, Server1, @last_id_in_tracking);
4. INSERT INTO data (product, version, server, fk_trk_id) VALUES (linux, 7.0, Server2, @last_id_in_tracking);
5. INSERT INTO data (product, version, server, fk_trk_id) VALUES (java, 8.0, Server3, @last_id_in_tracking);
// insert end time
6. UPDATE tracking SET end_time = NOW(6) WHERE id = @last_id_in_tracking;
I'd like sample code and an explanation of how to use those queries against multiple tables in the same Spring Batch Starter job.
start of edit section - additional question
I do have an additional question. In my entities I have them set up to represent the relationships with annotations (i.e. @ManyToOne, @JoinColumn)...
In your code, how would I get the trackingId from a referenced object? Let me explain:
My Code (Data.java):
@JsonManagedReference
@ManyToOne
@JoinColumn(name = "id")
private Tracking tracking;
Your code (Data.java):
@Column(name = "fk_trk_id")
private Long fkTrkId;
Your code (JobConfig.java):
final Data data = new Data();
data.setFkTrkId(trackingId);
How do I set the id with "setFkTrkId" when the relationship in my Entity is an object?
end of edit section - additional question
Here is an example app that does what you're asking. Please see the README for details.
https://github.com/joechev/examples/tree/master/csv-reader-db-writer
I have created a project for you as an example. Please refer to https://bigzidane.wordpress.com/2018/02/25/spring-batch-mysql-reader-writer-processor-listener/
This example simply has a Reader/Processor/Writer. The reader reads a CSV file, the processor processes the items, and the writer writes them to the database.
And we have a listener to capture the start and end of the job. On job start, we insert an entry into the DB and get back a generated ID. We pass that same ID to the writer when we store the entries.
Note: sorry, I reused an example I already had, so it may not match your question 100%, but technically it should be the same.
Thanks,
Nghia
I have transferred my project from MySQL to PostgreSQL and tried to drop the column as a result of a previous issue (Integer error transferring from MySQL to PostgreSQL), because even after I removed the problematic column from models.py and saved, the error didn't disappear.
I tried both with and without quotes.
ALTER TABLE "UserProfile" DROP COLUMN how_many_new_notifications;
Or:
ALTER TABLE UserProfile DROP COLUMN how_many_new_notifications;
Getting the following:
ERROR: relation "UserProfile" does not exist
Here's the model, if it helps:
class UserProfile(models.Model):
    user = models.OneToOneField(User)
    how_many_new_notifications = models.IntegerField(null=True, default=0)

User.profile = property(lambda u: UserProfile.objects.get_or_create(user=u)[0])
I supposed it might have something to do with mixed case, but I have found no solution in any of the similar questions.
Yes, PostgreSQL is a case-aware database, but Django is smart enough to know that. It converts all field names to lower case, and it generally converts the model name to a lower-case table name. However, the real problem here is that your table name is prefixed with the app name. Generally, Django table names look like:
<appname>_<modelname>
You can find out what exactly it is by:
from myapp.models import UserProfile
print (UserProfile._meta.db_table)
Obviously this needs to be typed into the Django shell, which is invoked with ./manage.py shell. The result of this print statement is what you should use in your query.
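As a sketch, assuming the app is called myapp (so the table resolves to myapp_userprofile), you could even run the drop from that same shell:
from django.db import connection
from myapp.models import UserProfile

table = UserProfile._meta.db_table   # e.g. 'myapp_userprofile'
with connection.cursor() as cursor:
    # The table name comes straight from Django, so no quoting games are needed.
    cursor.execute('ALTER TABLE %s DROP COLUMN how_many_new_notifications' % table)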
Client: DataGrip
Database engine: PostgreSQL
For me, opening a new console worked, because apparently due to the IDE cache it was not recognizing the table I had created.
Steps to operate with the tables of a database:
Database (Left side panel of the IDE) >
Double Click on PostgreSQL - #localhost >
Double Click on the name of the database >
Right click on public schema >
New > Console
GL
I'm fairly new to RSpec and have been trying to create some tests for my website, on which a user can post a reservation that is then saved to our database. Using RSpec and Capybara, I've been trying to simulate a user posting a reservation to the website. We have an existing test database, and I want the new reservation to be written to the database and not removed at the end of the RSpec test.
One of two things happens when we run the code: either it "works" but the new reservation can't be found in the database, or we get this error:
Failure/Error: Unable to find matching line from backtrace
ActiveRecord::StatementInvalid:
Mysql2::Error: This connection is in use by: #<Thread:0x007fb421fd6218 sleep>: SELECT `users`.* FROM `users` WHERE `users`.`id` = 6 ORDER BY `users`.`id` ASC LIMIT 1
# ./app/controllers/application_controller.rb:95:in `pass_login_status_to_js'
# ./app/middleware/search_suggestions.rb:12:in `call'
Why would this be happening? I realize that Capybara isn't generally meant to make permanent changes to a database; is there a different program/gem you'd recommend?
I currently have config.use_transactional_fixtures = false, and also have added the following on the recommendation of a few websites:
class ActiveRecord::Base
  mattr_accessor :shared_connection
  @@shared_connection = nil

  def self.connection
    @@shared_connection || retrieve_connection
  end
end

ActiveRecord::Base.shared_connection = ActiveRecord::Base.connection
To reiterate, I do want Capybara to be writing to my database (we use SQL). What can I do differently? Does it have something to do with database cleaner?
Yes, it has everything to do with database_cleaner. If you have it set up properly, it will clean your database between scenarios to keep the tests isolated.
There are a few ways to do what you want:
You can explicitly tell database_cleaner not to clean certain tables between scenarios:
DatabaseCleaner.strategy = :transaction, {except: [:countries, :states]}
DatabaseCleaner.clean_with(:truncation, {except: [:countries, :states]})
You can add your code to a before(:each) or before(:all) block
You can add your data to one or many fixtures
There are only a few cases where you should share data between scenarios (e.g. countries and states tables, which are good candidates for #3).
In any other case, I advise against sharing data between scenarios.
I've got to add about 25,000 records to the database at once in Rails.
I have to validate them, too.
Here is what I have for now:
# controller create action
def create
  emails = params[:emails][:list].split("\r\n")
  @created_count = 0
  @rejected_count = 0
  inserts = []
  emails.each do |email|
    @email = Email.new(:email => email)
    if @email.valid?
      @created_count += 1
      inserts.push "('#{email}', '#{Date.today}', '#{Date.today}')"
    else
      @rejected_count += 1
    end
  end
  return if emails.empty?
  sql = "INSERT INTO `emails` (`email`, `updated_at`, `created_at`) VALUES #{inserts.join(", ")}"
  Email.connection.execute(sql) unless inserts.empty?
  redirect_to new_email_path, :notice => "Successfully created #{@created_count} emails, rejected #{@rejected_count}"
end
It's VERY slow now; there's no way to add that number of records because of the timeout.
Any ideas? I'm using MySQL.
Three things come to mind:
You can help yourself with proper tools like:
zdennis/activerecord-import or jsuchal/activerecord-fast-import. The problem with your example is that you will also create 25,000 objects. If you tell activerecord-import not to use validations, it will not create new objects (activerecord-import/wiki/Benchmarks).
Importing tens of thousands of rows into a relational database will never be super fast; it should be done asynchronously via a background process. And there are tools for that too, like DelayedJob and more: https://www.ruby-toolbox.com/
Move the code that belongs in the model out of the controller(TM)
After that, you need to rethink the flow of this part of the application. If you're using background processing behind a controller action like create, you cannot just return HTTP 201 or HTTP 200. What you need to do is return a "quick" HTTP 202 Accepted, and provide a link to another representation where the user can check the status of their request (do we already have a success response? how many emails failed?), as it is now being processed in the background.
It can sound a bit complicated, and it is, which is a sign that maybe you shouldn't do it like that. Why do you have to add 25,000 records in one request? What's the background?
Why don't you create a rake task for the work? The following link explains it pretty well.
http://www.ultrasaurus.com/sarahblog/2009/12/creating-a-custom-rake-task/
In a nutshell, once you write your rake task, you can kick off the work by:
rake member:load_emails
If speed is your concern, I'd attack the problem from a different angle.
Create a table that copies the structure of your emails table; let it be emails_copy. Don't copy indexes and constraints.
Import the 25k records into it using your database's fast import tools. Consult your DB docs or see e.g. this answer for MySQL. You will have to prepare the input file, but it's way faster to do; I suppose you already have the data in some text or tabular form.
Create indexes and constraints for emails_copy to mimic emails table. Constraint violations, if any, will surface; fix them.
Validate the data inside the table. It may take a few raw SQL statements to check for severe errors. You don't have to validate emails for anything but very simple format anyway. Maybe all your validation could be done against the text you'll use for import.
insert into emails select * from emails_copy to put the emails into the production table. Well, you might play a bit with it to get autoincrement IDs right.
Once you're positive that the process succeeded, drop table emails_copy.