Is there any way to run a Python script with a Django project? - MySQL

I am new to Django and Python. I have created a Django web application, and I also have a Python script that has to run in the background of the web application in real time (it should continuously check for new updates and notify the web app about new responses from the API by generating notifications). I am using the IBM QRadar API, whose data I have to display on the web application.
I have two problems:
1) Is there any way I can use the Python code below with my Django project, as described above?
2) Can I use the API response, which is in JSON format, to store the data in the MySQL database directly from the response variable?
I could only find ways to store data in the database using forms, which is not what my project requires.
import json
import os
import sys
import importlib

sys.path.append(os.path.realpath('../modules'))
client_module = importlib.import_module('RestApiClient')
SampleUtilities = importlib.import_module('SampleUtilities')


def main():
    # First we have to create our client
    client = client_module.RestApiClient(version='9.0')

    # -------------------------------------------------------------------------
    # Basic 'GET'
    # In this example we'll be using the GET endpoint of siem/offenses without
    # any parameters. This will print absolutely everything it can find, every
    # parameter of every offense.

    # Send in the request
    SampleUtilities.pretty_print_request(client, 'siem/offenses', 'GET')
    response = client.call_api('siem/offenses', 'GET')

    # Check if the success code was returned to ensure the call to the API was
    # successful.
    if response.code != 200:
        print('Failed to retrieve the list of offenses')
        SampleUtilities.pretty_print_response(response)
        sys.exit(1)

    # Since the previous call had no parameters and the response has a lot of
    # text, we'll just print out the number of offenses.
    response_body = json.loads(response.read().decode('utf-8'))
    print('Number of offenses retrieved: ' + str(len(response_body)))

    # -------------------------------------------------------------------------
    # Using the fields parameter with 'GET'
    # If you just print out the result of a call to the siem/offenses GET
    # endpoint there will be a lot of fields displayed which you have no
    # interest in. Here, the fields parameter will make sure that only the
    # fields you want are displayed for each offense.

    # Setting a variable for all the fields that are to be displayed
    # (%2C is a URL-encoded comma)
    fields = 'id%2Cstatus%2Cdescription%2Coffense_type%2Coffense_source%2Cmagnitude%2Csource_network%2Cdestination_networks%2Cassigned_to'

    # Send in the request
    SampleUtilities.pretty_print_request(client, 'siem/offenses?fields=' + fields, 'GET')
    response = client.call_api('siem/offenses?fields=' + fields, 'GET')

    # Once again, check the response code
    if response.code != 200:
        print('Failed to retrieve list of offenses')
        SampleUtilities.pretty_print_response(response)
        sys.exit(1)

    # This time we will print out the data itself
    # SampleUtilities.pretty_print_response(response)
    response_body = json.loads(response.read().decode('utf-8'))
    print(response_body)
    print(type(response_body))
    for i in response_body:
        print(i)
        print("")
    for j in response_body:
        print(j['id'])
        print(j['status'])
        print(j['description'])


if __name__ == '__main__':
    main()

You can write your own management commands and invoke them with

myprojectpath/manage.py mycommand args ...

The documentation is here.
Note that this is required if you want to use Django models: there is a lot of environment setup that a management command does for you behind the scenes. If you just import and use Django models from a vanilla Python script, they won't work correctly.
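For example, here is a minimal sketch of such a command (the app name offenses, the Offense model, and the command name poll_offenses are assumptions, not part of your project; the file would live at offenses/management/commands/poll_offenses.py):

import importlib
import json
import os
import sys
import time

from django.core.management.base import BaseCommand

from offenses.models import Offense  # hypothetical model mirroring the JSON fields

sys.path.append(os.path.realpath('../modules'))
client_module = importlib.import_module('RestApiClient')


class Command(BaseCommand):
    help = "Poll the QRadar API and store new offenses in MySQL"

    def handle(self, *args, **options):
        client = client_module.RestApiClient(version='9.0')
        while True:
            response = client.call_api('siem/offenses', 'GET')
            if response.code == 200:
                for item in json.loads(response.read().decode('utf-8')):
                    # Write the JSON fields straight to the database --
                    # no form needed (this also answers question 2).
                    Offense.objects.update_or_create(
                        pk=item['id'],
                        defaults={
                            'status': item['status'],
                            'description': item['description'],
                        },
                    )
            time.sleep(60)  # poll interval in seconds

You would then run python manage.py poll_offenses in the background (e.g. under a process supervisor) so it keeps watching the API while the web app serves requests.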

Related

Run a function from an href in Flask

My website has two hrefs, and behind these hrefs I have my Flask function download_file(). It works, but now I have added a parameter to the Flask function, so I want to call download_file(1) when someone clicks the first href and download_file(2) when someone clicks the second.
<h2>Download a file </h2>
<p>
Download1
Download2
</p>
############ FLASK CODE
from flask import Flask, send_file, render_template

app = Flask(__name__)

@app.route('/')
def download():
    return render_template('indexdownload.html')

@app.route('/download')
def download_file(number):
    p = "test"
    if number == 1:
        p = "1.txt"
    elif number == 2:
        p = "2.txt"
    return send_file(p, as_attachment=True)

app.run()
I have tried different approaches, but I always get syntax errors because I don't know how to do this.
An anchor with an href attribute sends a GET request.
There are several alternative ways to send data within a GET request.
Variable Rule
One way to pass data within a URL is to use variable rules. The values are embedded within the path, extracted server-side, and passed to the endpoint as arguments.
@app.route('/download/<int:num>')
def download(num):
    # ...
URL Parameter
An alternative is to pass the data as a URL parameter. In this case the data must be queried inside the endpoint and is not necessarily present.
A parameter is queried by name from the request.args object. It is possible to define a default value and, by specifying a type, to convert the value automatically.
@app.route('/download')
def download():
    num = request.args.get('num', 0, type=int)
    # ...
In both cases you can pass the value to url_for within a Jinja2 expression to build the URL.
url_for('download', num=1)
Based on the code in the question and the intended use, I would suggest the first variant: you can then assume a value is always specified, assemble your filename, and serve the file.
Incidentally, you can also pass an entire file name within the URL.
@app.route('/download/<path:filename>')
def download(filename):
    # ...
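Putting the pieces together, a minimal runnable sketch of the first variant might look like this (the file names 1.txt and 2.txt are taken from the question):

from flask import Flask, render_template, send_file

app = Flask(__name__)

@app.route('/')
def index():
    # indexdownload.html contains the two anchors shown below
    return render_template('indexdownload.html')

@app.route('/download/<int:number>')
def download_file(number):
    # The number embedded in the URL picks the file to serve.
    p = "1.txt" if number == 1 else "2.txt"
    return send_file(p, as_attachment=True)

if __name__ == '__main__':
    app.run()

and in the template the two anchors become:

<a href="{{ url_for('download_file', number=1) }}">Download1</a>
<a href="{{ url_for('download_file', number=2) }}">Download2</a>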

How do I get django "Data too long for column '<column>' at row" errors to print the actual value?

I have a Django application. Sometimes in production, when uploading data, I get an error that one of the values is too long. It would be very helpful for debugging if I could see which value went over the limit. Can I configure this somehow? I'm using MySQL.
It would also be nice if I could enable/disable this on a per-model or column basis so that I don't leak user data to error logs.
When creating model instances from outside sources, you must take care to validate the input or have other guarantees that the data cannot violate constraints.
If you don't call at least full_clean() on the model but call save() directly, you bypass Django's validators and will only be alerted to the problem by the database driver, at which point it's harder to obtain diagnostics:
import json

from django.core.exceptions import ValidationError
from django.db import models


class JsonImportManager(models.Manager):
    # 'import' is a reserved word in Python, so the method needs another name.
    def import_json(self, json_string: str) -> int:
        data_list = json.loads(json_string)  # list of objects => list of dicts
        failed = 0
        for data in data_list:
            obj = self.model(**data)
            try:
                obj.full_clean()
            except ValidationError as e:
                print(e.message_dict)  # or use a better formatting function
                failed += 1
            else:
                obj.save()
        return failed
This is of course very simple, but it's a good boilerplate to get started with.
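To get at the original question, seeing which value went over the limit, you can log the rejected values yourself in the except branch, since at that point you still have the input dict. A hedged helper (the redaction policy is yours to decide, so you don't leak user data to the logs):

from django.core.exceptions import ValidationError


def log_validation_error(e: ValidationError, data: dict) -> None:
    # Print each failing field together with the value that was rejected.
    # Skip or mask sensitive fields before this reaches your error logs.
    for field, messages in e.message_dict.items():
        print(f"{field}={data.get(field)!r}: {'; '.join(messages)}")

Calling log_validation_error(e, data) instead of print(e.message_dict) shows the offending value right next to the "Ensure this value has at most N characters" message.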

Feeding JSON into Dynatable after user submission via Deform (Pyramid)

I'm building a webpage that takes user input and returns a table based on information in my backend. I'm using the Pyramid web framework. My current approach is the following:
1. Create a Colander schema and a Deform form object that is rendered using a Chameleon template.
2. Once the user hits submit, validate the submission and use the input to generate a list of dictionaries.
3. Encode this result into JSON and feed it to dynatable.js.
4. Display the Dynatable below my submission form.
Steps 3 and 4 are where I'm having an issue: I don't know what I need to do to expose my list of dictionaries to Dynatable. I've read the Pyramid Quick Tutorial, so I have an idea of how to do simple AJAX with a JSON renderer, but I have no idea how to apply it to my current situation.
To give a better idea, my function for processing the Deform input is as follows (part of the function and template is adapted from the Deform sample provided in the official GitHub repo):
@view_config(route_name='query_log', renderer='templates/form.pt')
def query_log(request):
    schema = Device().bind(request=request)

    # Create a styled button with some extra Bootstrap 3 CSS classes
    process_btn = deform.form.Button(name='process', title="Process")
    form = deform.form.Form(schema, buttons=(process_btn,), use_ajax=True)

    # User submitted this form
    if request.method == "POST":
        if 'process' in request.POST:
            try:
                appstruct = form.validate(request.POST.items())

                # Save form data from appstruct
                print("Enter ID:", appstruct["ID"])
                print("Enter date:", appstruct["date"])

                # This variable is what I want to feed to dynatable
                results = parse_log(appstruct["ID"], appstruct["date"].strftime('%Y-%m-%d'))
                json.dumps(results)

                # Create popup
                request.session.flash('Returning Results.')

                # Redirect to the page shown after successful form submission
                return HTTPFound("/")
            except deform.exception.ValidationFailure as e:
                # Render a form version where errors are visible next to the fields,
                # and the submitted values are posted back
                rendered_form = e.render()
    else:
        # Render a form with initial default values
        rendered_form = form.render()

    return {
        # This is just rendered HTML in a string
        # and can be embedded in any template language
        "rendered_form": rendered_form,
    }
How would I go about passing this results variable via JSON to Dynatable and have the table rendered below the form?
Try baby steps first.
After validation is successful in your try block:
try:
    appstruct = form.validate(request.POST.items())
    # The following depends on the structure of the data you need
    results = {"id": appstruct["ID"],
               "date": appstruct["date"].strftime('%Y-%m-%d')}
    data = json.dumps(results)
    return dict(data=data)
And in the target template, accept the data parameter, rendering to taste.
Once you have that in place, then you can work on how to pass data via an XHR request, which will depend on your choice of JavaScript library.
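For the XHR step, one approach is a second view with renderer='json' that Dynatable can query. A minimal sketch, assuming a route named query_log_json is configured and reusing the question's parse_log:

from pyramid.view import view_config


@view_config(route_name='query_log_json', renderer='json')
def query_log_json(request):
    # Pyramid's json renderer serializes the returned dict for us.
    results = parse_log(request.params.get('ID'),
                        request.params.get('date'))
    return {'records': results}

On the page you would then point Dynatable at that URL (e.g. via its dataset ajax/ajaxUrl options) instead of embedding the JSON in the form view.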

How to upload multiple JSON files into CouchDB

I am new to CouchDB. I receive 60 or more JSON files per minute from a server, and I have to upload each one to CouchDB individually as soon as I receive it.
I have installed CouchDB on my Linux machine.
I hope someone can help me with my requirement, if possible with pseudo-code.
My idea:
Write a Python script that uploads all the JSON files to CouchDB. Each JSON file must become its own document, and the data present in the JSON must be inserted into CouchDB unchanged (the same format and values as in the file).
Note:
These JSON files are transactional; one file is generated every second. So I need to read each file, upload its contents in the same format to CouchDB, and on successful upload archive the file into a different folder on the local system.
Python program to parse the JSON and insert it into CouchDB:
import sys
import glob
import errno, time, os
import couchdb, simplejson
import json
from pprint import pprint

couch = couchdb.Server()  # Assuming localhost:5984
# couch.resource.credentials = (USERNAME, PASSWORD)
# If your CouchDB server is running elsewhere, set it up like this:
couch = couchdb.Server('http://localhost:5984/')
db = couch['mydb']

path = 'C:/Users/Desktop/CouchDB_Python/Json_files/*.json'
# dirPath = 'C:/Users/VijayKumar/Desktop/CouchDB_Python'
files = glob.glob(path)
for file1 in files:
    # dirs = os.listdir(dirPath)
    file2 = glob.glob(file1)
    for name in file2:  # 'file' is a builtin type, 'name' is a less-ambiguous variable name.
        try:
            with open(name) as f:  # No need to specify 'r': this is the default.
                # sys.stdout.write(f.read())
                data = json.load(f)
                db.save(data)
                pprint(data)
            # time.sleep(2)
        except IOError as exc:
            if exc.errno != errno.EISDIR:  # Do not fail if a directory is found, just ignore it.
                raise  # Propagate other kinds of IOError.
I would use the CouchDB bulk API, even though you have specified that you need to send the files to the db one by one. For example, implementing a simple queue that gets flushed every, say, 5-10 seconds via a bulk docs call will greatly increase the performance of your application.
There is obviously a quirk: you need to know the IDs of the docs you want to get from the DB. But for PUTs it is perfect. (That is not entirely true either; you can get ranges of docs using a bulk operation if the IDs you use for your docs sort nicely.)
From my experience working with CouchDB, I have a hunch that you are dealing with transactional documents in order to compile them into some sort of aggregate result and act on that data accordingly (maybe creating the next transactional doc in the series). For that you can rely on CouchDB's 'reduce' functions on the views you create. It takes a little practice to get a reduce function working properly, and it is highly dependent on what you actually want to achieve and what data you are prepared to emit from the view, so I can't really provide you with more detail on that.
So in the end the app logic would go something like this (see the Python sketch after the link below):

get _design/someDesign/_view/yourReducedView
calculate new transaction
add transaction to queue
onTimeout
    send all in transaction queue
If I got that first part, about why you are using transactional docs, wrong, all that would really change in my app logic is the part where you get those transactional docs.
Also, before writing your own 'reduce' function, have a look at the built-in ones (they are a lot faster than anything outside the db engine can do):
http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API
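A minimal sketch of that queue idea, using plain requests against the _bulk_docs endpoint (the database URL and the 5-second flush interval are placeholders):

import threading

import requests

BULK_URL = "http://localhost:5984/mydb/_bulk_docs"  # placeholder db URL

queue = []
lock = threading.Lock()


def enqueue(doc):
    # Called for every incoming JSON file instead of saving immediately.
    with lock:
        queue.append(doc)


def flush():
    # Ship everything queued so far in a single bulk request,
    # then re-arm the timer for the next flush.
    with lock:
        docs = queue[:]
        queue.clear()
    if docs:
        resp = requests.post(BULK_URL, json={"docs": docs})
        resp.raise_for_status()
    threading.Timer(5.0, flush).start()


threading.Timer(5.0, flush).start()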
EDIT:
Since you are just starting, I strongly recommend having a look at the CouchDB Definitive Guide.
NOTE FOR LATER:
Here is one hidden stone (well, maybe not so much a hidden stone as a non-obvious thing for a newcomer to look out for). When you write a reduce function, make sure it does not produce too much output for queries without boundaries. This will extremely slow down the entire view, even when you pass reduce=false when getting data from it.
So you need to get JSON documents from a server and send them to CouchDB as you receive them. A Python script would work fine. Here is some pseudo-code:
loop (until no more docs)
    get new JSON doc from server
    send JSON doc to CouchDB
end loop
In Python, you could use requests to send the documents to CouchDB and probably to get the documents from the server as well (if it is using an HTTP API).
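Fleshing that pseudo-code out with requests might look like this (the source URL and its 404-means-done convention are assumptions; POSTing to the database lets CouchDB assign the document ID):

import requests

COUCH_DB = "http://localhost:5984/mydb"      # target database
SOURCE = "http://example.com/api/next-doc"   # placeholder source URL

while True:
    resp = requests.get(SOURCE)              # get new JSON doc from server
    if resp.status_code == 404:              # no more docs (assumed convention)
        break
    r = requests.post(COUCH_DB, json=resp.json())  # one doc, one POST
    r.raise_for_status()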
You might want to check out the pycouchdb module for Python 3. I've used it myself to upload lots of JSON objects into a CouchDB instance. My project does pretty much the same as you describe, so you can take a look at my project Pyro on GitHub for details.
My class looks like this:
import sys

import pycouchdb


class MyCouch:
    """ COMMUNICATES WITH COUCHDB SERVER """

    def __init__(self, server, port, user, password, database):
        # ESTABLISHING CONNECTION
        self.server = pycouchdb.Server("http://" + user + ":" + password
                                       + "@" + server + ":" + port + "/")
        self.db = self.server.database(database)

    def check_doc_rev(self, doc_id):
        # CHECKS REVISION OF SUPPLIED DOCUMENT
        try:
            rev = self.db.get(doc_id)
            return rev["_rev"]
        except Exception as inst:
            return -1

    def update(self, all_computers):
        # UPDATES DATABASE WITH JSON STRING
        try:
            result = self.db.save_bulk(all_computers, transaction=False)
            sys.stdout.write(" Updating database")
            sys.stdout.flush()
            return result
        except Exception as ex:
            sys.stdout.write("Updating database")
            sys.stdout.write("Exception: ")
            print(ex)
            sys.stdout.flush()
            return None
Let me know in case of any questions - I will be more than glad to help if you find some of my code usable.
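For instance, usage might look like this (the connection details are placeholders):

couch = MyCouch("localhost", "5984", "admin", "secret", "mydb")
docs = [{"_id": "pc-001", "owner": "alice"},
        {"_id": "pc-002", "owner": "bob"}]
couch.update(docs)                    # bulk save
print(couch.check_doc_rev("pc-001"))  # current revision, or -1 if missing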

Accessing a variable outside a RequestHandler

I'm using a CSS3 accordion effect, and I want to detect whether an attacker is using a script to make parallel requests. That is: I have a login form and a registration form on the same page, but only one is visible at a time because of the CSS3; to access the page, the user agent must be HTML5 compatible.
The trick I use is:
class Register(tornado.web.RequestHandler):
    def post(self):
        tt = self.get_argument("_xsrf") + str(time.time())
        rtime = float(tt.replace(self.get_argument("_xsrf"), ""))
        print(rtime)

class LoginHandler(BaseHandler):
    def post(self):
        tt = self.get_argument("_xsrf") + str(time.time())
        ltime = float(tt.replace(self.get_argument("_xsrf"), ""))
        print(ltime)
I've used the _xsrf variable because it's unique for every user, to avoid making the server think the requests come from the same machine.
Now, what I want is to compute the difference between the two time values, abs(ltime - rtime). That is: how do I access rtime outside the class? I only know how to access the value outside the method. I want to perform this comparison to detect whether the difference is small, meaning the user is running a script that makes parallel requests to overload the server.
In other words (for general Python users), if I have:
class Product:
    def info(self):
        self.price = 1000
    def show(self):
        print(self.price)

>>> car = Product()
>>> car.info()
>>> car.show()
1000
but what if I have another

class User:
    pass

then how do I write a method in User that prints self.price? I've tried inheritance, but got the error AttributeError: User instance has no attribute 'price'. So are only methods inherited, not attributes?
It sounds like you need to understand model objects and patterns that use persistent storage of data. A tornado.web.RequestHandler, and any object you subclass from it, only exists for the duration of your request: from when the URL is received on the server to when data is sent back to the browser via self.write() or self.finish().
I would recommend looking at some of the Django or Flask tutorials for basic ideas of how to build an MVC application in Python (there are no Tornado tutorials covering this that I know of).
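To make the idea concrete: each handler instance disappears after its request, so the timestamps must live somewhere that outlives both handlers. A minimal in-process sketch, using a module-level dict keyed by the _xsrf token (for production you would use a real store such as Redis or a database; a plain dict is not shared across processes):

import time

import tornado.web

# Module-level, so it outlives individual request handlers.
last_post_time = {}


class ThrottledHandler(tornado.web.RequestHandler):
    def too_fast(self, min_interval=1.0):
        token = self.get_argument("_xsrf")
        now = time.time()
        previous = last_post_time.get(token)
        last_post_time[token] = now
        # Two POSTs from the same token almost simultaneously
        # suggest a scripted parallel request.
        return previous is not None and (now - previous) < min_interval


class Register(ThrottledHandler):
    def post(self):
        if self.too_fast():
            raise tornado.web.HTTPError(429)
        # ... normal registration ...


class LoginHandler(ThrottledHandler):
    def post(self):
        if self.too_fast():
            raise tornado.web.HTTPError(429)
        # ... normal login ...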