import socket

def functions():
    print("hello")

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('192.168.137.1', 20000)
sock.bind(server_address)
sock.listen(1)
conn, addr = sock.accept()
print('Connected by', addr)
conn.sendall(b"Welcome to the server")
My question is how to send a function to the client. I know that conn.sendall(b"Welcome to the server") will send data to the client, which can be decoded.
I would like to know how to send a function to the client, like conn.sendall(functions()) - this does not work.
I would also like to know what function would allow the client to receive the function I am sending.
I have looked on the python website for a function that could do this but I have not found one.
What you are asking for is fundamentally impossible unless it is explicitly coded on the client side. If it were possible, one could write a virus which easily spreads to any remote machine. It is the client's right and responsibility to decode incoming data in whatever manner it chooses.
Considering the case where the client really does want to receive code to execute, the issue is that the code must be represented in a form which, at the same time,
is detached from the server's context and specifics, so it can be serialized and executed anywhere
allows secure execution in a kind of sandbox, because very few clients will allow arbitrary server code to do anything on the client side.
The latter is an extremely complex topic; read the security history of any web browser - most of the closed vulnerabilities are issues in exactly this kind of sandboxing.
(There are environments where such execution is allowed and desired, e.g. an Erlang cookie-based peering cluster. But in such a cluster, side B is also allowed to execute anything on side A.)
You should start by searching for an execution environment (a high-level virtual machine) which meets your needs for functionality and security. For Python, you'd look at the multiprocessing module: its worker pool implementation doesn't pass the code itself, but it simplifies passing data for execution requests. Passing arbitrary Python data (without functions) is covered by the marshal and pickle modules.
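To illustrate the data-passing approach that pickle enables, here is a minimal sketch continuing the server code from the question: it sends a plain Python object (not a function) to the client. The 4-byte length prefix is just one illustrative way to delimit the message, and pickle.loads must only ever be used on data from a trusted peer.

import pickle

# Server side, after conn, addr = sock.accept():
quote = {"symbol": "AAPL", "bid": 88.84, "ask": 88.86}   # plain data, not a function
payload = pickle.dumps(quote)
conn.sendall(len(payload).to_bytes(4, "big"))  # length prefix so the client knows how much to read
conn.sendall(payload)

# Client side (a separate program):
# client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client.connect(('192.168.137.1', 20000))
# size = int.from_bytes(client.recv(4), "big")
# quote = pickle.loads(client.recv(size))   # only unpickle data from a server you trust

The client still decides what to do with the decoded data; nothing in this exchange lets the server run code on the client, which is exactly the point made above.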
I have implemented a complex CSV import script in Golang.
I use a worker pool implementation for it. Inside that worker pool, workers run through thousands of small CSV files, categorizing, tagging and branding the products.
They all write to the same database table. So far so good.
The problem I'm facing is that if I choose more than 2 workers, the process randomly crashes with an error from the Go MySQL driver (always in packets.go:1102).
The workflow is:
foreach (csv) {
    workerPool.submit(csv)
}

func worker(csv) {
    foreach (line) {
        import(line)
    }
}

func import(line) {
    product = get(line)
    product.category = determine_category(product)
    product.brand = determine_brand(product)
    save(product.brand)
    product.tags = determine_tags(product)
    // and after all
    save(product)
}
I tried to wrap the save() calls in transactions, but it didn't help.
Now I have the following questions:
Is MySQL suited to concurrent writes to a single table?
If transactions are needed to accomplish this, where should they be set?
Is the Go SQL driver (where the error ALWAYS happens, in packets.go:1102) suited to do this?
Could anyone help me (maybe by being hired for a few hours)?
I'm completely stuck. I can also share the source code if that helps. But first I wanted to know whether you think it's my code or a more general issue.
Open a new db connection in each goroutine (or thread, for languages that use threads).
MySQL's protocol is stateful, which means if multiple goroutines attempt to use the same connection, the requests and responses get very confused.
You would have the same problem trying to share any other kind of stateful protocol connection between goroutines.
For example, FTP is also a stateful protocol, and it may be easier to picture. A client goroutine might send a message like "get file x", and the response should be a series of messages containing the content of that file. If another goroutine tries to use the same connection while that request/response is in progress, both clients get confused: the second goroutine reads packets that belong to a file it didn't request, and the first goroutine, which requested the file, finds that some packets it was expecting have already been read.
Similarly, MySQL's protocol does not support multiple client goroutines sharing a single connection.
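The principle is the same in any language: each worker owns its connection for the duration of its work. A minimal sketch of that pattern, written in Python with PyMySQL purely to illustrate the structure (the driver, credentials and table are assumptions, not taken from the question):

import threading
import pymysql  # assumed driver; any MySQL client works as long as connections are not shared

def worker(rows):
    # Each worker opens its OWN connection, so no two workers ever
    # interleave requests on the same stateful MySQL session.
    conn = pymysql.connect(host="localhost", user="importer",
                           password="secret", database="products")
    try:
        with conn.cursor() as cur:
            for name, brand in rows:
                cur.execute("INSERT INTO products (name, brand) VALUES (%s, %s)",
                            (name, brand))
        conn.commit()
    finally:
        conn.close()

chunks = [[("shoe", "acme")], [("hat", "acme")]]
threads = [threading.Thread(target=worker, args=(chunk,)) for chunk in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()

In Go the equivalent is simply to open a connection per goroutine, as described above, rather than handing one shared connection to every worker.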
I'm building an application that stores files into the FIWARE Object Storage. I don't quite understand what is the correct way of storing files into the storage.
The Python code snippet below, taken from the Object Storage - User and Programmers Guide, shows 2 ways of doing it:
def store_text(token, auth, container_name, object_name, object_text):
    headers = {"X-Auth-Token": token}
    # 1. version
    #body = '{"mimetype":"text/plain", "metadata":{}, "value" : "' + object_text + '"}'
    # 2. version
    body = object_text
    url = auth + "/" + container_name + "/" + object_name
    return swift_request('PUT', url, headers, body)
The 1st version confuses me, because when I first looked at the only Node.js module that works with the Object Storage (repo: fiware-object-storage), it seemed to use the 1st version. Since that module was making calls to the old (v1.1) API version instead of the presumably newest one (v2.0) that the Python example references, I'm not sure whether the 1st version is simply an outdated way of doing it.
As I played more with the module, I realised it didn't work and its code was a total mess. So I forked the project and quickly understood that I would need to rewrite it from the ground up, taking the above-mentioned Python example from the usage guide as a reference. Link to my repo.
As of writing this, the only methods that aren't implemented are object storing (PUT) and object fetching (GET).
I had some additional questions about the Object Storage which I sent to fiware-lab-help@lists.fiware.org, but haven't heard anything back, so I'm asking them here.
I haven't got much experience with writing API libraries. Should I worry about the auth token expiring? I presume it isn't necessary to re-authenticate every time we interact with the storage; the authentication should happen once when the server starts up (when we create an instance), which then keeps the token internally. Should I implement some kind of mechanism that refreshes the token?
Does the tenant id change? From the quote below I presume that getting a tenant is just a one-time deal, and later you can use it in the config to make fewer authentication calls.
A valid token is required to access an object store. This section
describes how to get a valid token assuming an identity management
system compatible with OpenStack Keystone is being used. If the
username, password and tenant details are known, only step 3 is
required. source
During authentication, when fetching tenants, how should I select the "right" one? For now I'm just taking the first one, similar to what the example code does.
Is it true that an object storage container belongs to only a single region?
Use only what you call version 2. Ignore your version 1. It is commented out in the example. It should be removed from the documentation.
(1) The token will be valid for some period of time. This could be an hour or a day, depending on the setup. This period of time should be specified in the token that is returned by the authentication service. The token needs to be periodically refreshed (one way to do that is sketched after this list).
(2) The tenant id does not change.
(3) Typically only one tenant id is returned. It is possible, however, that you were assigned more than one id, in which case you have to pick which one you are currently using. Containers typically belong to a single tenant and are not shared between tenants.
(4) Containers are typically limited to a single region. This may change in the future when multi-region support for a container is added to Swift.
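For point (1), here is a minimal sketch of one way to refresh the token, assuming an OpenStack Keystone v2-style /tokens endpoint and the Python requests library; the endpoint URL, the re-authenticate-on-401 strategy and the helper names are illustrative assumptions, not something prescribed by the FIWARE guide:

import requests

KEYSTONE_TOKENS_URL = "https://cloud.lab.fiware.org:4730/v2.0/tokens"  # illustrative endpoint

def authenticate(username, password, tenant_id):
    # Ask Keystone for a fresh token; the response also carries its expiry time.
    payload = {"auth": {"passwordCredentials": {"username": username, "password": password},
                        "tenantId": tenant_id}}
    resp = requests.post(KEYSTONE_TOKENS_URL, json=payload)
    resp.raise_for_status()
    return resp.json()["access"]["token"]["id"]

def store_object(token, auth, container, name, data, credentials):
    headers = {"X-Auth-Token": token}
    url = auth + "/" + container + "/" + name
    resp = requests.put(url, headers=headers, data=data)
    if resp.status_code == 401:
        # The token has expired: re-authenticate once and retry the request.
        token = authenticate(*credentials)
        resp = requests.put(url, headers={"X-Auth-Token": token}, data=data)
    resp.raise_for_status()
    return token  # hand the (possibly refreshed) token back to the caller

The same retry-on-401 wrapper can sit around every call the library makes, so the caller never has to think about expiry.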
I solved my troubles and created an NPM module that works with the FIWARE Object Storage: https://github.com/renarsvilnis/fiware-object-storage-ge
I see that AWS posts a JSON file with all their IP ranges here (actual JSON here).
I was thinking of using this JSON file to check every incoming connection in my Node app, but first I was wondering whether it would be far too much overhead to loop through it for every request.
Secondly, I wasn't sure exactly how to go about this, as the IP ranges are formatted differently, e.g.
43.250.192.0/24
46.51.128.0/18
27.0.0.0/22
I'm not too sure what those suffixes mean.
Has anyone done something similar?
Your first concern is correct - it's a lot of overhead to loop through Amazon's IPs for each request. This should be handled at the firewall.
Nevertheless, the ip_prefix field Amazon provides can be used to test whether an IP address falls within that subnet. The node-ip module can help with this: it has a cidrSubnet function that tests an address against a CIDR prefix. See the CoffeeScript below.
ip = require 'ip'                       # the node-ip module is published on npm as "ip"
amazonIPs = require './amazonIPs.json'  # the ip-ranges file downloaded from AWS
someUsersIP = '192.168.1.190'

for entry in amazonIPs.prefixes
  if ip.cidrSubnet(entry.ip_prefix).contains(someUsersIP)
    console.log "#{someUsersIP} is within the #{entry.ip_prefix} range"
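As for the suffixes: those are CIDR prefixes. In 43.250.192.0/24, the /24 means the first 24 bits identify the network, leaving 8 bits for hosts, so the range covers 43.250.192.0 through 43.250.192.255 (256 addresses); /18 and /22 work the same way with larger blocks. Just to make that concrete, here is the same membership test using Python's standard ipaddress module (the ranges are copied from the question; in the Node app you would keep using node-ip as above):

import ipaddress

# The three example prefixes from the question, parsed into network objects.
amazon_ranges = [ipaddress.ip_network(p) for p in
                 ("43.250.192.0/24", "46.51.128.0/18", "27.0.0.0/22")]

addr = ipaddress.ip_address("46.51.129.7")
for net in amazon_ranges:
    if addr in net:
        print(f"{addr} is within the {net} range ({net.num_addresses} addresses)")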
I've got a MySQL database hosted on my web site, with a table named UsrLic,
where anyone who wants to buy my software must register and enter his/her generated machine key (plus username, email, etc.).
So my question is:
I want to automate this process from my software; how should it work?
Should I connect to and update my database directly from my software? That would mean saving all my database connection parameters in it (my database username, password and server) and then using ADO or MyDAC to connect to the database. If yes, how secure is this process?
Or do you have any other suggestions?
I recommend creating an API on your web site in PHP and calling the API from Delphi.
That way, the database is only available to your web server and not to the client application, ever. In fact, you should run your database on localhost or with a private IP so that only machines on the same physical network can reach it.
I have implemented this and am implementing it again as we speak.
PHP
Create a new file named register_config.php. In this file, setup your MySQL connection information.
Create a file named register.php. In this file, put your registration functions. From this file, include 'register_config.php'. You will pass parameters to the functions you create here, and they will do the reading and writing to your database.
Create a file named register_api.php. From this file, include 'register.php'. Here, you will process POST or GET variables that are sent from your client application, call functions in register.php, and return results back to the client, all via HTTP.
You will have to research connecting to and querying a MySQL database. The W3Schools tutorials will have you doing this very quickly.
For example:
Your Delphi program calls https://mysite/register_api.php with Post() and sends the following values:
name=Marcus
email=marcus@gmail.com
Here's how the beginning of register_api.php might look:
<?php
// Our actual database and registration functions are in this library
include 'register.php';

// These are the name value pairs sent via POST from the client
$name = $_POST['name'];
$email = $_POST['email'];

// Sanitize and validate the input here...

// Register them in the DB by calling my function in register.php
if (registerBuyer($name, $email)) {
    // Let them know we succeeded
    echo "OK";
} else {
    // Let them know we failed
    echo "ERROR";
}
Delphi
Use Indy's TIdHTTP component and its Post() or Get() method to post data to register_api.php on the website.
You will get the response back in text from your API.
Keep it simple.
Security
All validation should be done on the server (API). The server must be the gatekeeper.
Sanitize all input to the API from the user (the client) before you call any functions, especially queries.
If you are using shared web hosting, make sure that register.php and register_config.php are not world readable.
If you are passing sensitive information, and it sounds like you are, you should call the registration API function from Delphi over HTTPS. HTTPS provides end to end protection so that nobody can sniff the data being sent off the wire.
Simply hook up a TIdSSLIOHandlerSocketOpenSSL component to your TIdHTTP component and you're good to go, minus any certificate verification.
Use the SSL component's OnVerifyPeer event to write your own certificate verification method. This is important. If you don't verify the server side certificate, other sites can impersonate you with DNS poisoning and collect the data from your users instead of you. Though this is important, don't let this hold you up since it requires a bit more understanding. Add this in a future version.
Why don't you use e.g. share*it? They also handle the buying process (I don't see how you would do this yourself) and let you create a reg key through a Delphi app.
Is there any way to "subscribe" from GWT to a stream of JSON objects and listen to incoming events on a keep-alive connection, without trying to fetch them all at once? I believe that the buzzword du jour for this technology is "Comet".
Let's assume that I have an HTTP service which opens a keep-alive connection and puts JSON objects with incoming stock quotes there in real time:
{"symbol": "AAPL", "bid": "88.84", "ask":"88.86"}
{"symbol": "AAPL", "bid": "88.85", "ask":"88.87"}
{"symbol": "IBM", "bid": "87.48", "ask":"87.49"}
{"symbol": "GOOG", "bid": "305.64", "ask":"305.67"}
...
I need to listen to these events and update GWT components (tables, labels) in real time. Any ideas how to do it?
There is a GWT Comet Module for StreamHub:
http://code.google.com/p/gwt-comet-streamhub/
StreamHub is a Comet server with a free community edition. There is an example of it in action here.
You'll need to download the StreamHub Comet server and create a new SubscriptionListener, use the StockDemo example as a starting point, then create a new JsonPayload to stream the data:
Payload payload = new JsonPayload("AAPL");
payload.addField("bid", "88.84");
payload.addField("ask", "88.86");
server.publish("AAPL", payload);
...
Download the JAR from the Google Code site, add it to your GWT project's classpath and add the includes to your GWT module:
<inherits name="com.google.gwt.json.JSON" />
<inherits name="com.streamhub.StreamHubGWTAdapter" />
Connect and subscribe from your GWT code:
StreamHubGWTAdapter streamhub = new StreamHubGWTAdapter();
streamhub.connect("http://localhost:7979/");
StreamHubGWTUpdateListener listener = new StockListener();
streamhub.subscribe("AAPL", listener);
streamhub.subscribe("IBM", listener);
streamhub.subscribe("GOOG", listener);
...
Then process the updates how you like in the update listener (also in the GWT code):
public class StockListener implements StreamHubGWTUpdateListener {
    public void onUpdate(String topic, JSONObject update) {
        String bid = ((JSONString)update.get("bid")).stringValue();
        String ask = ((JSONString)update.get("ask")).stringValue();
        String symbol = topic;
        ...
    }
}
Don't forget to include streamhub-min.js in your GWT project's main HTML page.
I have used this technique in a couple of projects, though it does have its problems. I should note that I have only done this specifically through GWT-RPC, but the principle is the same for whatever mechanism you are using to handle data. Depending on what exactly you are doing, there might not be much need to overcomplicate things.
First off, on the client side, I do not believe that GWT can properly support any sort of streaming data. The connection has to close before the client can actually process the data. What this means from a server-push standpoint is that your client will connect to the server and block until data is available at which point it will return. Whatever code executes on the completed connection should immediately re-open a new connection with the server to wait for more data.
From the server side of things, you simply drop into a wait cycle (the java.util.concurrent package is particularly handy for this, with its blocking and timeout primitives) until new data is available. At that point the server can return a package of data to the client, which updates accordingly. There are a bunch of considerations depending on what your data flow is like, but here are a few to think about:
Is a client getting every single update important? If so, then the server needs to cache any potential events between the time the client gets some data and then reconnects.
Are there going to be gobs of updates? If this is the case, it might be wiser to package up a number of updates and push down chunks at a time every several seconds rather than having the client get one update at a time.
The server will likely need a way to detect if a client has gone away to avoid piling up huge amounts of cached packages for that client.
I found there were two problems with the server-push approach. With lots of clients, it means lots of open connections on the web server; depending on the web server in question, this could mean lots of threads being created and held open. The second has to do with the typical browser's limit of 2 requests per domain. If you are able to serve your images, CSS and other static content from second-level domains, this problem can be mitigated.
There is indeed a Comet-like library for GWT - http://code.google.com/p/gwteventservice/
But I've not personally used it, so I can't really vouch for whether it's good or not; the documentation seems quite good, though. Worth a try.
There are a few other ones I've seen, like gwt-rocket's cometd library.
Some preliminary ideas for Comet implementation for GWT can be found here... though I wonder whether there is something more mature.
Also, some insight on GWT/Comet integration is available there, using even more cutting-and-bleeding edge technology: "Jetty Continuations". Worth taking a look.
Here you can find a description (with some source samples) of how to do this for IBM WebSphere Application Server. It shouldn't be too different with Jetty or any other Comet-enabled J2EE server. Briefly, the idea is: encode your Java object to a JSON string via GWT RPC, then send it to the client using Cometd, where it is received by Dojo, which triggers your JSNI code, which calls your widget methods, where you deserialize the object again using GWT RPC. Voila! :)
My experience with this setup is positive, there were no problems with it except for the security questions. It is not really clear how to implement security for comet in this case... Seems that Comet update servlets should have different URLs and then J2EE security can be applied.
The JBoss Errai project has a message bus that provides bi-directional messaging and is a good alternative to Cometd.
We are using the Atmosphere Framework (http://async-io.org/) for server push/Comet in a GWT application.
On the client side the framework has GWT integration that is pretty straightforward. On the server side it uses a plain servlet.
We are currently using it in production with 1000+ concurrent users in a clustered environment. We had some problems along the way that had to be solved by modifying the Atmosphere source. Also, the documentation is really thin.
The framework is free to use.