I'm using a CSS3 accordion effect, and I want to detect whether an attacker is scripting parallel requests. That is:
I have a login form and a registration form on the same page, but only one is visible at a time because of CSS3; to access the page, the user agent must be HTML5 compatible.
The trick I use is:
class Register(tornado.web.RequestHandler):
    def post(self):
        tt = self.get_argument("_xsrf") + str(time.time())
        rtime = float(tt.replace(self.get_argument("_xsrf"), ""))
        print rtime

class LoginHandler(BaseHandler):
    def post(self):
        tt = self.get_argument("_xsrf") + str(time.time())
        ltime = float(tt.replace(self.get_argument("_xsrf"), ""))
        print ltime
I used the _xsrf variable because it is unique for every user, to avoid making the server think the requests are coming from the same machine.
Now what I want: how do I take the difference between the time values, abs(ltime - rtime)? That is, how do I access rtime outside its class? I only know how to access a value outside a method. I want to do this calculation to detect whether the difference is small; if it is, the user is using a script to make parallel requests to kill the server!
In other words (for general Python users), if I have:
class Product:
    def info(self):
        self.price = 1000
    def show(self):
        print self.price

>>> car = Product()
>>> car.info()
>>> car.show()
1000
But what if I have another class:

class User:
    pass

How do I write a method that prints self.price? I tried inheritance, but I get AttributeError: User instance has no attribute 'price'. So are only methods inherited, not attributes?
It sounds like you need to understand model objects and patterns that use persistent storage of data. tornado.web.RequestHandler and any object you subclass from it only exist for the duration of a single request, from when the URL is received on the server to when data is sent back to the browser via self.write() or self.finish().
I would recommend looking at some of the Django or Flask tutorials for basic ideas on how to build an MVC application in Python (I don't know of a Tornado tutorial that covers this).
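As a minimal sketch of the idea, you could keep the last request time in storage that outlives the handler, keyed by the _xsrf token. Here the in-process dict, the 0.5 second threshold and the 429 response are arbitrary assumptions for illustration; a real application would use a database or cache instead:

import time
import tornado.web

# In-process store keyed by the _xsrf token; a real application would use a
# database or cache (Redis, memcached, ...) that outlives the process.
last_post_time = {}

class Register(tornado.web.RequestHandler):
    def post(self):
        token = self.get_argument("_xsrf")
        now = time.time()
        previous = last_post_time.get(token)
        last_post_time[token] = now
        # 0.5 s is an arbitrary threshold for "suspiciously fast".
        if previous is not None and now - previous < 0.5:
            raise tornado.web.HTTPError(429)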
I have a Django application. Sometimes in production I get an error when uploading data that one of the values is too long. It would be very helpful for debugging if I could see which value was the one that went over the limit. Can I configure this somehow? I'm using MySQL.
It would also be nice if I could enable/disable this on a per-model or column basis so that I don't leak user data to error logs.
When creating model instances from outside sources, you must take care to validate the input or have other guarantees that the data cannot violate constraints.
If you call save() directly without at least calling full_clean() on the model, you bypass Django's validators and will only be alerted to the problem by the database driver, at which point it is harder to obtain diagnostics:
import json

from django.core.exceptions import ValidationError
from django.db import models

class JsonImportManager(models.Manager):
    def import_json(self, json_string: str) -> int:  # "import" is a reserved word, so use another name
        data_list = json.loads(json_string)  # list of objects => list of dicts
        failed = 0
        for data in data_list:
            obj = self.model(**data)
            try:
                obj.full_clean()
            except ValidationError as e:
                print(e.message_dict)  # or use a better formatting function
                failed += 1
            else:
                obj.save()
        return failed
This is of course very simple, but it's a good boilerplate to get started with.
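For illustration, attaching the manager to a hypothetical model and calling it could look like this (the Product model, its field and the test data are assumptions, not part of the original answer):

from django.db import models

class Product(models.Model):
    # Hypothetical model; any model with a constrained field works the same way.
    name = models.CharField(max_length=10)

    objects = JsonImportManager()

failed = Product.objects.import_json('[{"name": "ok"}, {"name": "much-too-long-name"}]')
# The second dict fails full_clean(); message_dict names the offending field,
# with something like:
# {'name': ['Ensure this value has at most 10 characters (it has 18).']}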
We are working on a top-down-RPG-like multiplayer game for learning purposes (and fun!) with some friends. We already have some entities in the game and inputs are working, but the network implementation gives us headaches :D
The Issues
When trying to convert with dict, some values still contain the pygame.Surface, which I don't want to transfer and which causes errors when trying to JSON-ify them. Other objects that I would like to transfer in a simplified way, like Rectangle, cannot be converted automatically.
Already functional
Client-Server connection
Transferring JSON objects in both directions
Async networking and synchronized putting into a Queue
Situation
A new player connects to the server and wants to get the current game state with all objects.
Data-Structure
We use an "Entity-Component"-based architecture, so we separated the game logic very strictly into "systems", while the data is stored in the "components" of each Entity. The Entity is a very simple container and has nothing more than an ID and a list of components. Example Entity (shortened for better readability):
Entity
|-- Component (Moveable)
|-- Component (Graphic)
| |- complex datatypes like pygame.SURFACE
| `- (...)
`- Component (Inventory)
We tried different approaches, but none seems to fit very well; they all feel "hacky".
pickle
Very Python-specific, so it would not be easy to implement other clients in the future. And I've read about security risks when creating objects from network data in the dynamic way pickle does. It does not even solve the Surface/Rectangle issue.
__dict__
It still contains references to the original objects, so a "cleanup" or "filter" for unwanted datatypes would also affect the originals. A deepcopy throws an exception:
...\Python\Python36\lib\copy.py", line 169, in deepcopy
rv = reductor(4)
TypeError: can't pickle pygame.Surface objects
Show some code
The method of the EntityManager class that should generate the snapshot of all entities, including their components. This snapshot should be convertible to JSON without any errors, and if possible without much configuration in this core class.
class EntityManager:
    def generate_world_snapshot(self):
        """ Returns a dictionary with all entities and their components to send
        to the client. This function will probably generate a lot of data,
        but it is meant to send the whole current game state when a new player
        connects or when a complete refresh is required. """
        # It should be possible to add more objects to the snapshot, so we
        # create our own snapshot datastructure
        result = {'entities': {}}
        entities = self.get_all_entities()
        for e in entities:
            result['entities'][e.id] = deepcopy(e.__dict__)
            # Components are objects, but a dictionary is required for transfer
            cmp_obj_list = result['entities'][e.id]['components']
            # Empty the current list of components; it is going to be filled with
            # dictionaries of each component, cleaned for the dump, because of
            # the errors when directly converting the whole datastructure to JSON
            result['entities'][e.id]['components'] = {}
            for cmp in cmp_obj_list:
                cmp_copy = deepcopy(cmp)
                cmp_dict = cmp_copy.__dict__
                # Only list, dict, int, str, float and None will stay, while
                # other types are simply deleted, including their key.
                # Lists and dictionaries are cleaned up recursively as well.
                cmp_dict = self.clean_complex_recursive(cmp_dict)
                result['entities'][e.id]['components'][type(cmp_copy).__name__] \
                    = cmp_dict
        logging.debug("EntityMgr: Entity#3: %s" % result['entities'][3])
        return result
Expectation and actual results
We could find a way to manually filter out the elements we don't want. But as the list of components grows, we would have to put all the filter logic into this core class, which should not contain any component specializations.
Do we really have to put all the logic for filtering the right objects into the EntityManager? This does not feel good, as I would like to have all conversion to JSON done without any hardcoded configuration.
How to convert all this complex data in a most generic approach?
Thanks for reading so far and thank you very much for your help in advance!
Interesting articles which we have already worked through and which may be helpful for others with similar issues:
https://gafferongames.com/post/what_every_programmer_needs_to_know_about_game_networking/
http://code.activestate.com/recipes/408859/
https://docs.python.org/3/library/pickle.html
UPDATE: Solution - thanks to sloth
We ended up with the following architecture, which works really well so far and is also easy to maintain!
The EntityManager now calls the get_state() method of each entity.
class EntityManager:
    def generate_world_snapshot(self):
        """ Returns a dictionary with all entities and their components to send
        to the client. This function will probably generate a lot of data,
        but it is meant to send the whole current game state when a new player
        connects or when a complete refresh is required. """
        # It should be possible to add more objects to the snapshot, so we
        # create our own snapshot datastructure
        result = {'entities': {}}
        entities = self.get_all_entities()
        for e in entities:
            result['entities'][e.id] = e.get_state()
        return result
The Entity has only some basic attributes to add to the state and forwards the get_state() call to all the Components:
class Entity:
    def get_state(self):
        state = {'name': self.name, 'id': self.id, 'components': {}}
        for cmp in self.components:
            state['components'][type(cmp).__name__] = cmp.get_state()
        return state
The components themselves now inherit their get_state() method from their new superclass Component, which simply takes care of all simple datatypes:
class Component:
    def __init__(self):
        logging.debug('generic component created')

    def get_state(self):
        state = {}
        for attr, value in self.__dict__.items():
            if value is None or isinstance(value, (str, int, float, bool)):
                state[attr] = value
            elif isinstance(value, (list, dict)):
                # logging.warn("Generating state: not supporting lists yet")
                pass
        return state
class GraphicComponent(Component):
    # (...)
Now every developer has the option to override this method to create a more detailed get_state() for complex types directly in the component classes (like Graphic, Movement, Inventory, etc.) if the state needs to be saved more accurately, which is a huge win for maintaining the code in the future, since those code pieces live in one class.
The next step is to implement the static method for creating the instances from the state in the same class; a rough sketch of both is below. This makes the whole thing work really smoothly.
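For example, an overridden get_state() together with the planned from_state() factory might look roughly like this in a component class (the attribute names and the load_image() helper are illustrative assumptions, not our actual code):

class GraphicComponent(Component):
    def __init__(self, image_name, x=0, y=0):
        super().__init__()
        self.image_name = image_name            # plain string, easy to serialize
        self.image = load_image(image_name)     # hypothetical loader returning a pygame.Surface
        self.rect = self.image.get_rect(topleft=(x, y))

    def get_state(self):
        # Only ship primitives; the Surface gets rebuilt on the client.
        return {'image_name': self.image_name, 'x': self.rect.x, 'y': self.rect.y}

    @staticmethod
    def from_state(state):
        return GraphicComponent(state['image_name'], state['x'], state['y'])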
Thank you so much sloth for your help.
Do we really have to put all the logic into the EntityManager for filtering the right objects?
No, you should use polymorphism.
You need a way to represent your game state in a form that can be shared between different systems; so maybe give your components a method that will return all of their state, and a factory method that allows you to create the component instances out of that very state.
(Python already has the __repr__ magic method, but you don't have to use it)
So instead of doing all the filtering in the entity manager, just let it call this new method on all components and let each component decide what the result will look like.
Something like this:
...
result = {'entities': {}}
entities = self.get_all_entities()
for e in entities:
    result['entities'][e.id] = {'components': {}}
    for cmp in e.components:
        result['entities'][e.id]['components'][type(cmp).__name__] = cmp.get_state()
...
And a component could implement it like this:
class GraphicComponent:
    def __init__(self, pos=...):
        self.image = ...
        self.rect = ...
        self.whatever = ...

    def get_state(self):
        return {'pos_x': self.rect.x, 'pos_y': self.rect.y, 'image': 'name_of_image.jpg'}

    @staticmethod
    def from_state(state):
        return GraphicComponent(pos=(state['pos_x'], state['pos_y']), ...)
And a client's EntityManager that receives the state from the server would iterate over the component list of each entity and call from_state to create the instances.
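A rough sketch of that client-side step, assuming a small registry that maps component names back to their classes (the registry and the rebuild_entities() name are illustrative, not part of the answer):

COMPONENT_CLASSES = {'GraphicComponent': GraphicComponent}  # one entry per component class

def rebuild_entities(snapshot):
    # Recreate component instances from a received snapshot dictionary.
    entities = {}
    for entity_id, entity_state in snapshot['entities'].items():
        components = []
        for cmp_name, cmp_state in entity_state['components'].items():
            components.append(COMPONENT_CLASSES[cmp_name].from_state(cmp_state))
        entities[entity_id] = components
    return entities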
Can I map Scala functions to JSON; or perhaps via a different way than JSON?
I know I can map data types, which is fine. But I'd like to create a function, map it to JSON send it via a REST method to another server, then add that function to a list of functions in another application and apply it.
For instance:
def apply(f: Int => String, v: Int) = f(v)
I want to make a list of functions that can be applied within an application, over different physical locations. Now I want to add and remove functions from that list by means of REST calls.
Let's assume I understand security problems...
ps.. If you downvote, you might as well have the decency to explain why
If I understand correctly, you want to be able to send Scala code to be executed on different physical machines. I can think of a few different ways of achieving that:
Using tools for distributed computing, e.g. Spark. You can set up Spark clusters on different machines and then select which cluster you want to submit Spark jobs to. There are a lot of other tools for distributed computing that might also be worth looking into.
Pass Scala code as a string and compile it either within your server-side code (here's an example) or by invoking scalac as an external process.
Send the functions as byte code and execute the byte code on the remote machine.
If it fits with what you want to do, I'd recommend option #1.
Make sure that you can really trust the code that you want to execute, to not expose yourself to malicious code.
The answer is you can't do this, and even if you could you shouldn't!
You should never, never, never write a REST API that allows the client to execute arbitrary code in your application.
What you can do is create a number of named operations that can be executed. The client can then pass the name of the operation, which the server looks up in a Map[String, <function>] and executes.
As mentioned in my comment, here is an example of how to turn a case class into JSON. Things to note: don't question the implicit val format line (it's magic); each case class requires a companion object for this to work; and if you have Option fields in your case class and set them to None when turning it into JSON, those fields will be omitted (if you set them to Some(whatever), they will look like any other field). If you don't know much about Scala Play, ignore the extra stuff for now; this is just inside the default Controller you're given when you make a new project in IntelliJ.
package controllers

import javax.inject._
import play.api.libs.json.{Json, OFormat}
import play.api.mvc._

import scala.concurrent.Future

@Singleton
class HomeController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {

  case class Attributes(heightInCM: Int, weightInKG: Int, eyeColour: String)
  object Attributes {
    implicit val format: OFormat[Attributes] = Json.format[Attributes]
  }

  case class Person(name: String, age: Int, attributes: Attributes)
  object Person {
    implicit val format: OFormat[Person] = Json.format[Person]
  }

  def index: Action[AnyContent] = Action.async {
    val newPerson = Person("James", 24, Attributes(192, 83, "green"))
    Future.successful(Ok(Json.toJson(newPerson)))
  }
}
When you run this app with sbt run and hit localhost:9000 through a browser, the output you see on-screen is below (formatted for better reading). This is also an example of how you might send JSON as a response to a GET request. It isn't the cleanest example but it works.
{
  "name": "James",
  "age": 24,
  "attributes": {
    "heightInCM": 192,
    "weightInKG": 83,
    "eyeColour": "green"
  }
}
Once more though, I would never recommend passing actual functions between services. If you insist though, maybe store them as a String in a Case Class and turn it into JSON like this. Even if you are okay with passing functions around, it might even be a good exercise to practice security by validating the functions you receive to make sure they're not malicious.
I also have no idea how you'll convert them back into a function either - maybe write the String you receive to a *.scala file and try to run them with a Bash script? Idk. I'll let you figure that one out.
def copy_file(from_file, to_file):
    content = open(from_file).read()
    target = open(to_file, 'w').write(content)
    print open(to_file).read()

def user_input(f1):
    f1 = raw_input("Enter the source file : ")

user_input(f1)
user_input(f2)
copy_file(user_input(f1), user_input(f2))
What is the mistake in this? I tried it with argv and it was working.
You're not calling the function user_input (by using ()). (fixed in question by OP).
Also, you need to return a string from user_input. Currently you're trying to set a variable f1 which is local to the function user_input. While this is possible using global, I do not recommend it (it works against keeping your code clean).
It's possible to do something similar with objects by changing their state. A string is an object, but since strings are immutable the function can't change its state, so expecting the function to change the string it's given is also doomed to fail.
def user_input():
    return raw_input("Enter the source file :").strip()

copy_file(user_input(), user_input())
You can see user_input does very little, it's actually redundant if you assume user input is valid.
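Putting the pieces together, a minimal corrected version might look like this (the prompts and the with-based file handling are my own choices here, not from the question):

def copy_file(from_file, to_file):
    # Read the source and write it to the destination; 'with' closes the files for us.
    with open(from_file) as src:
        content = src.read()
    with open(to_file, 'w') as dst:
        dst.write(content)
    print content  # Python 2, matching the original code

def user_input(prompt):
    return raw_input(prompt).strip()

copy_file(user_input("Enter the source file : "),
          user_input("Enter the destination file : "))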
I am new to CouchDB. I need to get 60 or more JSON files in a minute from a server.
I have to upload these JSON files to CouchDB individually as soon as I receive them.
I installed CouchDB on my Linux machine.
I hope someone can help me with my requirement.
If possible can someone help me with pseudo code.
My Idea:
Is to write a python script for uploading all JSON files to CouchDB.
Each JSON file must become its own document, and the data present in the JSON must be inserted into CouchDB as-is
(keeping the specified format and values from the file).
Note:
These JSON files are transactional; one file is generated every second, so I need to read each file
and upload it in the same format into CouchDB, and on successful upload
archive the file into a different folder on the local system.
Python program to parse the JSON and insert it into CouchDB:
import sys
import glob
import errno, time, os
import couchdb, simplejson
import json
from pprint import pprint

couch = couchdb.Server()  # Assuming localhost:5984
#couch.resource.credentials = (USERNAME, PASSWORD)
# If your CouchDB server is running elsewhere, set it up like this:
couch = couchdb.Server('http://localhost:5984/')
db = couch['mydb']

path = 'C:/Users/Desktop/CouchDB_Python/Json_files/*.json'
#dirPath = 'C:/Users/VijayKumar/Desktop/CouchDB_Python'
files = glob.glob(path)

for file1 in files:
    #dirs = os.listdir( dirPath )
    file2 = glob.glob(file1)
    for name in file2:  # 'file' is a builtin type, 'name' is a less-ambiguous variable name.
        try:
            with open(name) as f:  # No need to specify 'r': this is the default.
                #sys.stdout.write(f.read())
                json_data = f
                data = json.load(json_data)
                db.save(data)
                pprint(data)
                json_data.close()
                #time.sleep(2)
        except IOError as exc:
            if exc.errno != errno.EISDIR:  # Do not fail if a directory is found, just ignore it.
                raise  # Propagate other kinds of IOError.
I would use the CouchDB bulk API, even though you have specified that you need to send the documents to the db one by one. For example, implementing a simple queue that gets flushed every, say, 5-10 seconds via a bulk doc call will greatly increase the performance of your application.
There is one quirk: you need to know the IDs of the docs that you want to get from the DB. But for PUTs it is perfect. (That is not entirely true either; you can get ranges of docs with a bulk operation if the IDs you use for your docs sort nicely.)
From my experience working with CouchDB, I have a hunch that you are dealing with transactional documents in order to compile them into some sort of aggregate result and act on that data accordingly (maybe creating the next transactional doc in the series). For that you can rely on CouchDB's 'reduce' functions on the views you create. It takes a little practice to get a reduce function working properly, and it is highly dependent on what you actually want to achieve and what data you are prepared to emit from the view, so I can't really provide you with more detail on that.
So in the end the app logic would go something like that:
get _design/someDesign/_view/yourReducedView
calculate new transaction
add transaction to queue
onTimeout
send all in transaction queue
If I got the first part (why you are using transactional docs) wrong, all that would really change in my app logic is the part where you get those transactional docs.
Also, before writing your own 'reduce' function, have a look at the built-in ones (they are a lot faster than anything outside of the db engine can do):
http://wiki.apache.org/couchdb/HTTP_Bulk_Document_API
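As a rough illustration of the queue-plus-bulk idea (the database name, the flush interval and the helper names are assumptions; the _bulk_docs endpoint is from the API linked above):

import json
import requests

COUCH_URL = "http://localhost:5984/mydb"   # assumed database URL
queue = []

def enqueue(doc):
    # Collect incoming docs instead of saving them one at a time.
    queue.append(doc)

def flush_queue():
    # Send everything collected so far in a single _bulk_docs call.
    if not queue:
        return
    resp = requests.post(
        COUCH_URL + "/_bulk_docs",
        data=json.dumps({"docs": queue}),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    del queue[:]

# Call flush_queue() from a timer or loop, e.g. roughly every 5 seconds.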
EDIT:
Since you are just starting, I strongly recommend having a look at the CouchDB Definitive Guide.
NOTE FOR LATER:
Here is one hidden stone (well, maybe not so much a hidden stone as something that is not obvious for a newcomer to look out for): when you write a reduce function, make sure it does not produce too much output for an unbounded query. This will slow the entire view down dramatically, even when you pass reduce=false while querying it.
So you need to get JSON documents from a server and send them to CouchDB as you receive them. A Python script would work fine. Here is some pseudo-code:
loop (until no more docs)
    get new JSON doc from server
    send JSON doc to CouchDB
end loop
In Python, you could use requests to send the documents to CouchDB and probably to get the documents from the server as well (if it is using an HTTP API).
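A minimal sketch of that loop using requests (the database URL, the source endpoint and its 404-means-done behaviour are assumptions for illustration):

import requests

COUCH_DB = "http://localhost:5984/mydb"   # assumed CouchDB database URL
SOURCE = "http://example.com/next-doc"    # hypothetical endpoint serving the JSON docs

while True:
    resp = requests.get(SOURCE)
    if resp.status_code == 404:           # assume 404 means there are no more documents
        break
    doc = resp.json()
    # POSTing a document to the database lets CouchDB assign the _id.
    requests.post(COUCH_DB, json=doc).raise_for_status()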
You might want to check out the pycouchdb module for Python 3. I've used it myself to upload lots of JSON objects into a CouchDB instance. My project does pretty much the same as you describe, so you can take a look at my project Pyro on GitHub for details.
My class looks like this:
import sys
import pycouchdb

class MyCouch:
    """ COMMUNICATES WITH COUCHDB SERVER """

    def __init__(self, server, port, user, password, database):
        # ESTABLISHING CONNECTION
        self.server = pycouchdb.Server("http://" + user + ":" + password + "@" + server + ":" + port + "/")
        self.db = self.server.database(database)

    def check_doc_rev(self, doc_id):
        # CHECKS REVISION OF SUPPLIED DOCUMENT
        try:
            rev = self.db.get(doc_id)
            return rev["_rev"]
        except Exception as inst:
            return -1

    def update(self, all_computers):
        # UPDATES DATABASE WITH JSON STRING
        try:
            result = self.db.save_bulk(all_computers, transaction=False)
            sys.stdout.write(" Updating database")
            sys.stdout.flush()
            return result
        except Exception as ex:
            sys.stdout.write("Updating database")
            sys.stdout.write("Exception: ")
            print(ex)
            sys.stdout.flush()
            return None
Let me know in case of any questions - I will be more than glad to help if you will find some of my code usable.
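For reference, instantiating and using the class might look like this (the connection details and documents are placeholders, not values from my project):

couch = MyCouch("localhost", "5984", "admin", "secret", "mydb")
docs = [{"_id": "host-1", "status": "ok"}, {"_id": "host-2", "status": "ok"}]
result = couch.update(docs)            # bulk-saves the list of documents
print(couch.check_doc_rev("host-1"))   # prints the current revision, or -1 if missing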