I am trying to insert data into a bucket, but I was not able to find any query logged in the .log files present in the /opt/couchbase/var/lib/couchbase/logs path.
For example:
INSERT INTO Employee (KEY, VALUE) VALUES
( "Emp Id::0199", { "Emp Name": "Ana", "Emp Company" : "GS Lab", "Emp Country" : "India"} )
RETURNING *;
Where is this INSERT query logged in the log folder?
N1QL info is logged in query.log, and indexer info is logged in indexer.log.
Logging every SQL statement would be expensive due to concurrency and disk I/O, so SQL statements are not logged to the log files.
If a query takes more than 1 second, it is logged in system:completed_requests.
You can do select * from system:completed_requests;
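For example, after running the INSERT above, a query like the following (the fields shown are illustrative and may vary by Couchbase version) lists the captured requests, most recent first:
SELECT requestTime, elapsedTime, statement
FROM system:completed_requests
WHERE UPPER(statement) LIKE 'INSERT%'
ORDER BY requestTime DESC;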
Check out this post for more details: https://forums.couchbase.com/t/identify-top-n-queries-in-couchbase/28138
Another option is to enable auditing and check the audit entries.
[
{"link":"https://twitter.com/GreenAddress/status/550793651186855937",
"pDate":"2015 01 1",
"title":"GreenAddress",
"description": "btcarchitect coinkite blockchain circlebits coinbase bitgo some maybe some are oracle cosigners which require lesszero trust"},
{"link":"https://twitter.com/Bit_Swift/status/550765718581411840",
"pDate":"2015 01 1",
"title":"Bitswift™",
"description": "swiftstealth offers you privacy in bitswift v2 swiftstealth enables stealth address use on the bitswift blockchain swift"},
{"link":"https://twitter.com/allenday/status/550741133500772352",
"pDate":"2015 01 1",
"title":"Allen Day, PhD",
"description": "all in one article bitcoin blockchain 3dprinting drones and deeplearninghttp simondlr compost101071618938adecentralizedaivia simondlr"}
]
My test.json file looks like the above,
and my MySQL DB table is here.
I can load a text file in CSV format, but I have no idea how to load a JSON text file into MySQL.
I tried [create table test ( data json);] and
[insert into test values ( '{json type}');], but when I load data in CSV format, LOAD DATA INFILE 'test.txt' makes it possible,
so I wonder whether JSON has the same functionality.
Thanks for any advice.
MySQL does have a JSON data type. However, it will not work with your file and current table structure, as it requires a field to be JSON. Loading your data will require a little bit of programming work. Depending on your current ability, you will need to write code that does the following (a short sketch follows the list):
Open a database connection
Read the JSON and loop through each value
Store each value using the following INSERT query:
INSERT INTO news(link, date, title, description) VALUES($link, $pDate, $title, $description);
Depending on your language and database connection feature, close the database connection.
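A minimal sketch in Python, assuming the mysql-connector-python package and a target table named news with the columns used in the INSERT above (the connection details and names are placeholders; adjust them to your actual schema):
import json
import mysql.connector  # assumes the mysql-connector-python package is installed

# open a database connection (credentials are placeholders)
conn = mysql.connector.connect(host="localhost", user="user", password="secret", database="mydb")
cursor = conn.cursor()

# read the JSON file and loop through each value
with open("test.json") as f:
    rows = json.load(f)

# store each value using a parameterized INSERT
sql = "INSERT INTO news(link, date, title, description) VALUES (%s, %s, %s, %s)"
for row in rows:
    cursor.execute(sql, (row["link"], row["pDate"], row["title"], row["description"]))

conn.commit()
cursor.close()
conn.close()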
I have an SSIS package which uses a SQL command to get data from a Progress database. Every time I execute the query, it throws this specific error:
ERROR [HY000] [DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Internal error -1 (buffer too small for generated record) in SQL from subsystem RECORD SERVICES function recPutLONG called from sts_srtt_t:::add_row on (ttbl# 4, len/maxlen/reqlen = 33/32/33) for . Save log for Progress technical support.
I am running the following query:
Select max(ROWID) as maxRowID from TableA
GROUP BY ColumnA,ColumnB,ColumnC,ColumnD
I've had the same error.
After changing the startup parameters -SQLTempStorePageSize and -SQLTempStoreBuff to 24 and 3000 respectively, the problem was solved.
I think for you the values should be changed to 40 and 20000.
You can find more information here. The name of the parameter in that article was a bit different than in my database; it depends on which Progress version is used.
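A rough sketch of where these parameters could go, assuming the database is started from the command line with proserve (they can equally be added to the server's .pf parameter file; the database path and values are placeholders taken from the suggestion above):
proserve /path/to/yourdb -SQLTempStorePageSize 40 -SQLTempStoreBuff 20000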
I am aware of syncdb and makemigrations, but we are restricted from doing that in the production environment.
We recently had a couple of tables created on production. As expected, the tables were not visible in admin for any user.
After that, we had the below 2 queries executed manually on the production SQL (I ran the migration on my local and did a SHOW CREATE TABLE query to fetch the raw SQL):
django_content_type
INSERT INTO django_content_type(name, app_label, model)
values ('linked_urls',"urls", 'linked_urls');
auth_permission
INSERT INTO auth_permission (name, content_type_id, codename)
values
('Can add linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'add_linked_urls'),
('Can change linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'change_linked_urls'),
('Can delete linked_urls Table', (SELECT id FROM django_content_type where model='linked_urls' limit 1) ,'delete_linked_urls');
Now this model is visible for the super-user, who is able to grant access to staff users as well, but staff users can't see it.
Is there any table entry that needs to be added?
Or is there any other way to solve this problem without syncdb or migrations?
We recently had a couple of tables created on production.
I can read what you wrote there in two ways.
First way: you created tables with SQL statements, for which there are no corresponding models in Django. If this is the case, no amount of fiddling with content types and permissions will make Django suddenly use the tables. You need to create models for the tables. Maybe they'll be unmanaged, but they need to exist.
Second way: the corresponding models in Django do exist, and you just manually created tables for them, so that's not a problem. What I'd do in this case is run the following code; explanations follow after the code:
from django.contrib.contenttypes.management import update_contenttypes
from django.apps import apps as configured_apps
from django.contrib.auth.management import create_permissions
for app in configured_apps.get_app_configs():
    update_contenttypes(app, interactive=True, verbosity=0)

for app in configured_apps.get_app_configs():
    create_permissions(app, verbosity=0)
What the code above does is essentially perform the work that Django performs after it runs migrations. When a migration occurs, Django just creates tables as needed; then, when it is done, it calls update_contenttypes, which scans the tables associated with the models defined in the project and adds to the django_content_type table whatever needs to be added. Then it calls create_permissions to update auth_permission with the add/change/delete permissions that need adding. I've used the code above to force permissions to be created early during a migration. It is useful if I have a data migration, for instance, that creates groups that need to refer to the new permissions.
So, finally, I had a solution. I did a lot of debugging on Django, and apparently the function below (at django.contrib.auth.backends) does the job of providing permissions.
def _get_permissions(self, user_obj, obj, from_name):
    """
    Returns the permissions of `user_obj` from `from_name`. `from_name` can
    be either "group" or "user" to return permissions from
    `_get_group_permissions` or `_get_user_permissions` respectively.
    """
    if not user_obj.is_active or user_obj.is_anonymous() or obj is not None:
        return set()

    perm_cache_name = '_%s_perm_cache' % from_name
    if not hasattr(user_obj, perm_cache_name):
        if user_obj.is_superuser:
            perms = Permission.objects.all()
        else:
            perms = getattr(self, '_get_%s_permissions' % from_name)(user_obj)
        perms = perms.values_list('content_type__app_label', 'codename').order_by()
        setattr(user_obj, perm_cache_name, set("%s.%s" % (ct, name) for ct, name in perms))
    return getattr(user_obj, perm_cache_name)
So what was the issue?
The issue lay in this query:
INSERT INTO django_content_type(name, app_label, model)
values ('linked_urls',"urls", 'linked_urls');
It looks fine initially, but the actual query executed was:
-- notice the capitalization here; it looked so trivial that I didn't even bother to look into it until I realised what was happening internally
INSERT INTO django_content_type(name, app_label, model)
values ('Linked_Urls',"urls", 'Linked_Urls');
So Django, internally, when running migrate, ensures everything is migrated in lower case, and this was the problem!
I executed a separate query to lower-case all the previous inserts, and voila!
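For reference, a sketch of what such a fix-up query could look like (the app_label filter is illustrative; adjust it to the rows you inserted manually):
UPDATE django_content_type
SET model = LOWER(model), name = LOWER(name)
WHERE app_label = 'urls';
A similar LOWER() update applies to auth_permission.codename if those rows were inserted with capital letters as well.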
I have a Cassandra DB with data that has a TTL of X hours for every column value, and this needs to be pushed to an Elasticsearch cluster in real time.
I have seen past posts on StackOverflow that advise using tools such as Logstash or pushing data directly from the application layer.
However, how can one preserve the TTL of the imported data once it is copied to ES version >= 5.0?
There was once a field called _ttl, which was deprecated in ES 2.0 and removed in ES 5.0.
As of ES 5, there are now two official ways of preserving the TTL of your data. First, make sure to create a TTL field in your ES documents that is set to the creation date of your row in Cassandra plus the TTL seconds. So if in Cassandra you have a record like this:
INSERT INTO keyspace.table (userid, creation_date, name)
VALUES (3715e600-2eb0-11e2-81c1-0800200c9a66, '2017-05-24', 'Mary')
USING TTL 86400;
Then you should index the following document into ES:
{
"userid": "3715e600-2eb0-11e2-81c1-0800200c9a66",
"name": "mary",
"creation_date": "2017-05-24T00:00:00.000Z",
"ttl_date": "2017-05-25T00:00:00.000Z"
}
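In case it helps, here is a rough sketch of how such a document could be built and indexed, assuming the Python cassandra-driver and elasticsearch (5.x) clients; the contact point, keyspace, table, and index names are placeholders:
from datetime import datetime, timedelta, timezone

from cassandra.cluster import Cluster
from elasticsearch import Elasticsearch

cluster = Cluster(["127.0.0.1"])                # placeholder contact point
session = cluster.connect("keyspace")           # placeholder keyspace
es = Elasticsearch(["http://localhost:9200"])   # placeholder ES endpoint

# TTL(name) returns the remaining TTL in seconds for that column
rows = session.execute("SELECT userid, name, creation_date, TTL(name) AS ttl_left FROM table")
for row in rows:
    # remaining TTL added to the current time gives the expiry date
    ttl_date = datetime.now(timezone.utc) + timedelta(seconds=row.ttl_left)
    doc = {
        "userid": str(row.userid),
        "name": row.name,
        "creation_date": str(row.creation_date),
        "ttl_date": ttl_date.isoformat(),
    }
    es.index(index="your_index", doc_type="doc", id=str(row.userid), body=doc)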
Then you can either:
A. Use a cron job that will regularly perform a delete-by-query based on your ttl_date field, i.e. call the following command from your cron (a sample crontab entry is shown after it):
curl -XPOST localhost:9200/your_index/_delete_by_query -d '{
"query": {
"range": {
"ttl_date": {
"lt": "now"
}
}
}
}'
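A crontab entry for this could look something like the following (running hourly; the schedule and host are illustrative):
0 * * * * curl -XPOST 'localhost:9200/your_index/_delete_by_query' -d '{"query":{"range":{"ttl_date":{"lt":"now"}}}}'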
B. Or use time-based indices and insert each document into an index matching its ttl_date field. For instance, the above document would be inserted into the index named your_index-2017-05-25. Then with the Curator tool you can easily delete indices that have expired.
I'm trying to get logs created by DMS.
I read the DMS documentation and successfully captured DMS's SQL logging, like the following:
2017-02-17T00:58:29 [TARGET_APPLY ]D: Construct statement execute internal: 'UPDATE `some_schema`.`typical_usr_master` SET `id`=? WHERE `id`=? AND `start_dt`=? ''(ar_odbc_stmt.c:3323)
However, this log doesn't have the original bind values, for example id or start_dt.
If they were revealed, the values would be like id = "00000001", start_dt = "2017-02-17".
Is there any chance to see such bind values in DMS logging?
Currently, I have changed all logging levels to DEBUG, but only ERROR logging shows such bind values.
I received an answer from AWS support.
How to reveal DMS SQL query logging with binding values?
Unfortunately, we do not expose bind values in logs because of security concerns.
Normally, we recommend customers to look at their target to get the actual data
values we migrated.
I would be happy if there were a way to check data integrity.
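For what it's worth, one way to follow that advice and spot-check the migrated values on the target is a query like the following (the table and column identifiers come from the captured UPDATE statement, and the values are the examples from the question):
SELECT `id`, `start_dt`
FROM `some_schema`.`typical_usr_master`
WHERE `id` = '00000001' AND `start_dt` = '2017-02-17';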