I have a problem getting some records from the database of a biometric device (a Suprema BioStar). I have the following code to query it from a Python program:
cursor.execute("SELECT t1.USRID, SRVDT, DEVUID, EVTLGUID, EVT, NM, t1.USRGRUID, USRUID, date(SRVDT), WEEK(SRVDT, 0) AS SEMANA, TIME(SRVDT), DRUID , date(STTDT) FROM biostar2_ac.t_lg"+str(aniomes2)+" t1 INNER JOIN biostar2_ac.t_usr t2 ON t1.USRID = t2.USRID WHERE ( date(SRVDT) BETWEEN "+str(aniomes2)+firstDay+" AND "+str(aniomes2)+fecha2+" ) ORDER BY USRGRUID, USRUID, SRVDT, EVTLGUID")
The problem is that a record has some data missing when I try to get it from my program, but if I search for the record in the BioStar web app, the record is complete.
I didn't create the database; it seems to come by default with the biometric system. I don't know if there is a hidden database and the database I'm getting into is a copy of the main one. I looked for the record in the database I'm using in my program, but it doesn't appear, so there's nothing wrong in the code. I suspect that there is another database or instance that the web app is consulting.
I was able to implement a connection from R through RMariaDB and DBI to a remote MariaDB database. However, I am currently encountering a strange change of numbers when querying the database through R. I'll explain the difference:
I inserted one simple entry in my database with the following command:
INSERT INTO respondent ( id, name ) VALUES ( 2388793051, 'testuser' )
When I connect to this database directly on the remote server and execute a statement like this:
SELECT * FROM respondent;
it delivers this value:
id: 2388793051, name: testuser
So I should also be able to connect to the database via R and receive the same results. When I execute the following code in R, I expect to receive the inserted and saved information displayed above:
library(DBI)
library(RMariaDB)
conn <- DBI::dbConnect(drv=RMariaDB::MariaDB(), user="myusername", password="mypassword", host="127.0.0.1", port="1111", dbname="mydbname")
res <- dbGetQuery(conn, "SELECT * FROM respondent")
print(res)
However, the result of this query is the following:
id name
-1906174245 testuser
As you can see, the id is now -1906174245 instead of the 2388793051 saved in the database. I don't understand this weird conversion of integers in the id field. Can someone explain how this problem emerges and how I might solve it?
EDIT: I don't expect this to be a problem, but just to inform you: I am using an SSH tunnel to enable a connection via these specified ports from my local to my remote machine.
SOLUTION: What made the difference was specifying the id column in the database as BIGINT instead of INT. Thanks to @JonnyCrunch
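For context, a signed 32-bit INT tops out at 2147483647, and 2388793051 - 2^32 = -1906174245, which is exactly the value R displayed; widening the column avoids the wraparound. A minimal sketch of the change, assuming the existing table can simply be altered in place:

-- Widen the id column so values above 2147483647 no longer wrap around
-- when read through a 32-bit signed integer type.
ALTER TABLE respondent MODIFY id BIGINT;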
I have several procedures with the same name but in different schemas. When these procedures raise an error, is it possible in the parent procedure (which is calling these nested stored procedures) to get the schema of the procedure which raised the error? For example, I can get the name from ERROR_PROCEDURE(), but is there some option to also get the SCHEMA? Otherwise I am not sure exactly which procedure threw the error if there are many with the same name.
I guess this feature is still missing
https://connect.microsoft.com/SQLServer/feedback/details/124627/schema-not-reported-in-the-error-procedure-function
but is there some workaround for this ?
Sadly, there is no 100% Workaround for this limitation in SQL Server.
Shame on the MSSQL Dev Team for not rectifying this, well over a Decade later.
It should be as simple as adding a New Function like ERROR_ProcedureSchema() or ERROR_PROCID().
Here is a revived Post Requesting this Feature from way back in May of 2005:
https://feedback.azure.com/forums/908035-sql-server/suggestions/32894584-schema-not-reported-in-the-error-procedure-functio
I prefer to Log as much detail as possible about Exceptions I capture in my custom Error Handling Logic.
This is the best I could come up with to find the Schema Name:
DECLARE @Error_ProcSchemaName nVarChar(128) --Leave as Null if found in more than 1 Schema.

--Only Populate the @Error_ProcSchemaName if it Belongs to 1 Schema. - 04/08/2019 - MCR.
SELECT @Error_ProcSchemaName = S.name
  FROM sys.objects as O
  JOIN sys.schemas as S
    ON S.schema_id = O.schema_id
  JOIN
  (
    SELECT O.name[ObjectName], COUNT(*)[Occurrences]
      FROM sys.objects as O
     GROUP BY O.name
  ) AS Total
    ON Total.ObjectName = O.name
 WHERE O.name = ERROR_PROCEDURE()
   AND Total.Occurrences = 1
Avoid using anything like OBJECT_SCHEMA_NAME(OBJECT_ID(ERROR_PROCEDURE())) as the string you pass into OBJECT_ID() should already have the Schema in it (which ERROR_PROCEDURE() does not).
Otherwise it will default to your Default Schema, which (in most cases) is dbo.
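For example (hypothetical names): if dbo.uspProcessOrder and audit.uspProcessOrder both exist and the error actually came from audit.uspProcessOrder:

-- ERROR_PROCEDURE() returns only N'uspProcessOrder', with no Schema prefix, so
-- OBJECT_ID() resolves it against your Default Schema, not the Schema that errored:
SELECT OBJECT_SCHEMA_NAME(OBJECT_ID(N'uspProcessOrder'));  -- returns 'dbo' - misleading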
Run this Query to View all your Object Names that are Reused across Schemas:
--View Object Names that Exist in Multiple Schemas: - 04/08/2019 - MCR.
SELECT S.name[SchemaName], O.name[ObjectName], Total.Occurrences,
       O.type[Type], O.type_desc[TypeDesc],
       O.object_id[ObjectID], O.principal_id[PrincipalID], O.parent_object_id[ParentID],
       O.is_ms_shipped[MS], O.create_date[Created], O.modify_date[Modified]
  FROM sys.objects as O
  JOIN sys.schemas as S
    ON S.schema_id = O.schema_id
  JOIN
  (
    SELECT O.name[ObjectName], COUNT(*)[Occurrences]
      FROM sys.objects as O
     GROUP BY O.name
  ) AS Total
    ON Total.ObjectName = O.name
 WHERE Total.Occurrences > 1
 ORDER BY [ObjectName], [SchemaName]
If you only have a few Objects (Sprocs and Triggers) that overlap, then you might be okay not knowing the Schema as it may be obvious where it originated from.
However, if this is not the case, then you may need to either:
Change the Name of the Sproc/Trigger to make it Unique.
This option goes against the very fiber of my being.
If you are using Advanced Error Handling, then manually add the Schema of your Sproc/Trigger with OBJECT_SCHEMA_NAME(@@PROCID) in your Catch-Block when logging the Error (see the sketch after this list).
Note: These options may not be possible due to the use of 3rd Party Sprocs you are not allowed to edit.
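A minimal sketch of that second option (hypothetical procedure name; THROW requires SQL Server 2012+), where each Sproc reports its own Schema from inside its Catch-Block:

CREATE PROCEDURE audit.uspDoWork
AS
BEGIN
    BEGIN TRY
        RAISERROR('Something failed', 16, 1);  -- stand-in for the real work
    END TRY
    BEGIN CATCH
        -- @@PROCID is the Object ID of the currently executing Sproc/Trigger,
        -- so this builds the fully qualified name, e.g. 'audit.uspDoWork'.
        DECLARE @FullName nVarChar(260) =
            OBJECT_SCHEMA_NAME(@@PROCID) + N'.' + OBJECT_NAME(@@PROCID);
        -- Log @FullName along with ERROR_MESSAGE(), ERROR_LINE(), etc., then re-raise:
        THROW;
    END CATCH
END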
When Troubleshooting with multiple Sprocs/Triggers that share the same Name, you might be able to write a Custom Wrapper-Sproc to call your 3rd Party Sproc, then Log any exception thrown in your Wrapper to know exactly which Schema/Sproc caused it.
The Code Smell:
If you have multiple Sprocs/Triggers with the same name spread across various Schemas
then I would call that a "Code Smell".
Meaning, your Architecture is flawed.
You may not be properly encapsulating your logic for reuse.
There will be times when a name overlaps Schemas, but this should be rare and by coincidence only.
Misappropriating Schemas for Handling Multi-Tenant / UserGroup Access:
If you are trying something Multi-Tenant (storing data from different Organizations/UserGroups in the same Database and preventing them from seeing each other's info) and running almost the same logic in each Schema that shares the Object Name, then that's a Design Problem.
You should have your data in Different Databases if Users will be accessing it directly
or have a TenantID or UserGroupID you always pass in and filter on everywhere when Users will be accessing from a Custom Application.
Some possible solutions I can think of:
Renaming each Stored Procedure so that they have different names in the different schemas.
Adding some debugging output to the Stored Procedures so that when they are being executed, you can see which one was in progress when your error occurred.
Running the SQL Profiler to see what is being called at the time your error occurs.
However, these suggestions come more from the perspective of troubleshooting an issue you're having right now, rather than building in error handling for potential future troubleshooting. You could always have these Stored Procedures write log files to disk somewhere, so you can interrogate those logs when an error is experienced.
I am aware of syncdb and makemigrations, but we are restricted from running those in the production environment.
We recently had a couple of tables created on production. As expected, the tables were not visible in the admin for any user.
After that, we had the below 2 queries executed manually on the production SQL (I ran the migration on my local machine and used a SHOW CREATE TABLE query to fetch the raw SQL):
django_content_type
INSERT INTO django_content_type(name, app_label, model)
values ('linked_urls',"urls", 'linked_urls');
auth_permission
INSERT INTO auth_permission (name, content_type_id, codename)
values
('Can add linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'add_linked_urls'),
('Can change linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'change_linked_urls'),
('Can delete linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'delete_linked_urls');
Now this model is visible under the super-user, who is able to grant access to staff users as well, but staff users can't see it.
Is there any table entry that needs to be added?
Or is there any other way to solve this problem without syncdb or migrations?
We recently had couple of tables created on production.
I can read what you wrote there in two ways.
First way: you created tables with SQL statements, for which there are no corresponding models in Django. If this is the case, no amount of fiddling with content types and permissions will make Django suddenly use the tables. You need to create models for the tables. Maybe they'll be unmanaged, but they need to exist.
Second way: the corresponding models in Django do exist; you just manually created tables for them, so that's not a problem. What I'd do in this case is run the following code; explanations follow after the code:
from django.contrib.contenttypes.management import update_contenttypes
from django.apps import apps as configured_apps
from django.contrib.auth.management import create_permissions

# Rebuild the django_content_type rows for every installed app...
for app in configured_apps.get_app_configs():
    update_contenttypes(app, interactive=True, verbosity=0)

# ...then create any missing add/change/delete permissions.
for app in configured_apps.get_app_configs():
    create_permissions(app, verbosity=0)
What the code above does is essentially perform the work that Django performs after it runs migrations. When a migration occurs, Django just creates tables as needed; when it is done, it calls update_contenttypes, which scans the models defined in the project and adds to the django_content_type table whatever needs to be added. Then it calls create_permissions to update auth_permission with the add/change/delete permissions that need adding. I've used the code above to force permissions to be created early during a migration. It is useful if I have a data migration, for instance, that creates groups that need to refer to the new permissions.
So, finally I had a solution. I did a lot of debugging on Django, and apparently the function below (at django.contrib.auth.backends) does the job of providing permissions:
def _get_permissions(self, user_obj, obj, from_name):
    """
    Returns the permissions of `user_obj` from `from_name`. `from_name` can
    be either "group" or "user" to return permissions from
    `_get_group_permissions` or `_get_user_permissions` respectively.
    """
    if not user_obj.is_active or user_obj.is_anonymous() or obj is not None:
        return set()

    perm_cache_name = '_%s_perm_cache' % from_name
    if not hasattr(user_obj, perm_cache_name):
        if user_obj.is_superuser:
            perms = Permission.objects.all()
        else:
            perms = getattr(self, '_get_%s_permissions' % from_name)(user_obj)
        perms = perms.values_list('content_type__app_label', 'codename').order_by()
        setattr(user_obj, perm_cache_name, set("%s.%s" % (ct, name) for ct, name in perms))
    return getattr(user_obj, perm_cache_name)
So what was the issue?
The issue lay in this query:
INSERT INTO django_content_type(name, app_label, model)
values ('linked_urls',"urls", 'linked_urls');
It looks fine initially, but the actual query executed was:
-- notice the casing here - it looked so trivial, I didn't even bother to look into it until I realized what was happening internally
INSERT INTO django_content_type(name, app_label, model)
values ('Linked_Urls',"urls", 'Linked_Urls');
So Django, internally, when running migrate, ensures everything is migrated in lower case - and this was the problem!
I had a separate query executed to lower-case all the previous inserts, and voila!
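Something along these lines (a sketch, assuming MySQL and the app_label from the question):

-- Normalize the manually inserted rows to the lower-case names Django expects.
UPDATE django_content_type
   SET name = LOWER(name), model = LOWER(model)
 WHERE app_label = 'urls';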
I have a MySQL table that I am reading with the RMySQL package in R. I would like to be able to refer directly to the data frame stored in the table so I can interact with it seamlessly, rather than having to execute an RMySQL statement every time I want to do something. Is there a way to accomplish this? I tried:
data <- dbReadTable(conn = con, name = 'tablename')
For example, if I now want to check how many rows I have in this table I would run:
nrow(data)
Does this go through the database connection, or am I now storing the object "data" locally, defeating the whole purpose of using an external database?
data <- dbReadTable(conn = con, name = 'tablename')
This command downloads all the data into a local R data frame (assuming you have enough RAM). Any operations on data from that point forward do not require the SQL connection.
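If you only need an aggregate, you can instead push the query to the server (e.g. via dbGetQuery(con, ...)) so that only the result travels over the connection; a minimal sketch of the statement, using the table name from the question:

-- Runs on the MySQL server and returns a single number instead of the whole table.
SELECT COUNT(*) AS n FROM tablename;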
I used Leafe's stru2mysql.prg and vfp2mysql_upload.prg to create a .sql dump file from DBFs. I connect to the MySQL database from VFP using ODBC. I know how to upload the SQL dump file, but I need to automate the whole process, i.e. after creating the dump file, my Visual FoxPro program should upload the dump file without a third party (automatically). I thought of using the SOURCE command, but that needs to be run at the mysql prompt. The assumption here is that my end users don't know how to import (which most of them don't). Please advise on how I can automate importing the SQL file into the MySQL database. Thank you.
I think what you are looking for are the various SQL* functions in Foxpro. See the VFP help or MSDN on SQLCONNECT (or SQLSTRINGCONNECT), SQLEXEC, and SQLDISCONNECT functions to get you started. Microsoft provided good examples on each in the documentation.
You may also want to use FILETOSTR to get the output from Leafe's programs into a string for the SQLEXEC function.
Here are the steps I use to take data from a Visual FoxPro database and upload it to a MySQL database. These are all put into a custom method on a form, which is fired by a command button. For example, the method would be 'uploadnewdata' and I pass parameters for whichever data tables I need:
1) Connect to the Server - I use MySQL ODBC
2) Validate the user (this uses a SQLEXEC to pull the correct matching record from a users table):
IF M.WorkingDatabase <> -1
    * Pull the users table from the server into a local cursor.
    nRetVal = SQLEXEC(m.WorkingDatabase, "SELECT * FROM users", "csrUsersOnServer")
    SELECT csrUsersOnServer
    * Match the supplied credentials against the downloaded cursor.
    SELECT userid FROM csrUsersOnServer ;
        WHERE ALLTRIM(UPPER(userid)) = ALLTRIM(UPPER(lcRanchUser)) ;
        AND ALLTRIM(UPPER(lcPassWord)) = ALLTRIM(UPPER(lchPassWord)) ;
        INTO CURSOR ValidUsers
    IF _TALLY >= 1
    ELSE
        =MESSAGEBOX("Your Premise ID Does Not Match Any Records On The Server", "System Message")
        RETURN 0
    ENDIF
ELSE
    =MESSAGEBOX("Unable To Connect To Your Database", "System Message")
    RETURN 0
ENDIF
3) Once that is successful, I create my base cursor (this is the one I'm sending from)
4) I then loop through that cursor, creating variables for the values in the fields
5) Then, using SQLEXEC and INSERT INTO, I insert each record (see the sketch below)
6) Once the program has finished processing the cursor, it generates a messagebox with the 'finished' message and control returns to the form
All the user has to do is select the starting table and enter their login information.
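For step 5, the statement passed to SQLEXEC is an ordinary MySQL INSERT; VFP substitutes the ?name placeholders with the variables created in step 4 (the table and column names here are hypothetical):

-- One row per pass through the loop; ?lnRanchId etc. are VFP memory variables.
INSERT INTO animals (ranch_id, tag_no, weigh_date, weight)
VALUES (?lnRanchId, ?lcTagNo, ?ldWeighDate, ?lnWeight)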