I'm new to Python programming and I'm trying to analyze the pending transactions of the BSC network. My program watches for pending events and does something with each one. The problem is that a lot of events arrive while the event loop is active, and my process is too slow to keep up with all the new data in real time.
If I remove the call to hash_analise() and just print the events, everything is fine: the program receives data fast and I can print all hashes in real time. But when I call this function, the program becomes much slower.
I tried threading, but I need to synchronize all the data from the events and wait with thread.join(), and waiting on the thread makes it even slower than before.
Is there any way to run this faster?
Thanks for the help. Code without threads:
from web3 import Web3

# web3 is assumed to be an already-connected Web3 instance (node endpoint omitted here);
# TOKEN_LOWER_CORRIGIDO is the token string being searched for, defined elsewhere in the script.

def hash_analise(hash):
    try:
        hash_analise = web3.eth.get_transaction(hash)
        print_hash = Web3.toJSON(hash_analise)
        print("IMPRIME HASH1:", print_hash)
        if TOKEN_LOWER_CORRIGIDO in print_hash:
            print("\nCONTÉM A STRING ESCOLHIDA")  # the chosen string is present
    except:
        print("TRANSAÇÃO NÃO LOCALIZADA")  # transaction not found
if __name__ == "__main__":
    tx_filter = web3.eth.filter('pending')
    count = 0
    while True:
        for event in tx_filter.get_new_entries():
            evento = Web3.toJSON(event)
            txnhash = evento[1:67]
            hash_analise(txnhash)
            count += 1
        print("Main", count)
The goal is to listen to a specific user and forward their tweets to a Telegram bot with minimal delay.
To implement this, I use the Tweepy library, which, as I understand it, offers the two types of authentication that matter for me:
On behalf of the user - 900 requests per 15 minutes (i.e. 1 request per second)
On behalf of the application - 300 requests per 15 minutes (i.e. 1 request every 3 seconds)
I authenticate my script on behalf of the user using OAuth1UserHandler (shown below).
Despite this, the slowdown appears after about 7.5 minutes of work, given that the script hits the Twitter API once every 1.5 seconds. In other words, one way or another my script ends up being authenticated on behalf of the application. As a workaround, I made a second bot that starts 7 minutes after the previous one, so that the two take turns and the rate window has time to reset. My main problem is that the speed of fetching tweet data drops: an acceptable delay is at most 1.5 seconds, but delays sometimes last 13 seconds.
Please tell me what I'm doing wrong, or how I can solve this better.
Code of one of the two bots:
import tweepy
import datetime
import time
from notifiers import get_notifier
from re import sub

TOKEN = 'telegram token here'
USER_ID = 'telegram user id here'
ADMIN_ID = 'telegram my id here, for checking that the bots work'

auth = tweepy.OAuth1UserHandler(
    consumer_key="consumer key",
    consumer_secret="consumer secret",
    access_token="access token",
    access_token_secret="access token secret",
)
api = tweepy.API(auth)
# auth.set_access_token(access_token, access_token_secret)
print("############### Tokens connected ###############")

user = 'whose username we will listen to'
username_object = api.get_user(screen_name=user)


def listening_to_the_user():
    print(' We start listening to the user...')
    print(' When a user posts a tweet, you will hear an audio notification...')
    seconds_left = 60 * 10
    while seconds_left >= 0:
        for i in api.user_timeline(user_id=username_object.id, screen_name=user, count=1):
            tweet_post = i.created_at
            tweet_text = sub(r"https?://t.co[^,\s]+,?", "", i.text)
            tweet_time_information = [tweet_post.day, tweet_post.month, tweet_post.year, tweet_post.hour, tweet_post.minute]
            now = datetime.datetime.now()
            current_time = [now.day, now.month, now.year, now.hour, now.minute]
            if tweet_time_information == current_time:
                telegram = get_notifier('telegram')
                notification_about_tweet = f'️{user}⬇️'
                notification_about_tweet_time = f'{tweet_post.day}.{tweet_post.month}.{tweet_post.year}, {tweet_post.hour}:{tweet_post.minute}.{tweet_post.second}'
                notification_about_current_time = f'{now.day}.{now.month}.{now.year}, {now.hour}:{now.minute}.{now.second}'
                telegram.notify(token=TOKEN, chat_id=USER_ID, message=notification_about_tweet)
                telegram.notify(token=TOKEN, chat_id=USER_ID, message=tweet_text)
                try:
                    entities = i.extended_entities
                    itr = entities['media']
                    for img_dict in range(len(itr)):
                        telegram.notify(token=TOKEN, chat_id=ADMIN_ID, message=(entities['media'][img_dict]['media_url_https']))
                except:
                    entities = 0
                telegram.notify(token=TOKEN, chat_id=ADMIN_ID, message=notification_about_tweet)
                telegram.notify(token=TOKEN, chat_id=ADMIN_ID, message=tweet_text)
                telegram.notify(token=TOKEN, chat_id=ADMIN_ID, message=notification_about_tweet_time)
                telegram.notify(token=TOKEN, chat_id=ADMIN_ID, message=notification_about_current_time)
                try:
                    entities = i.extended_entities
                    itr = entities['media']
                    for img_dict in range(len(itr)):
                        telegram.notify(token=TOKEN, chat_id=USER_ID,
                                        message=(entities['media'][img_dict]['media_url_https']))
                except:
                    entities = 0
                seconds_left -= 60
                time.sleep(60)
        seconds_left -= 1.5
        time.sleep(1.5)


listening_to_the_user()
Initially I tried to use bearer_token for authentication, but that did not change the behavior of my program in any way, so I simply settled on the tokens that are in the code now.
I dug through the documentation looking for an answer to my question, but the only thing I came up with was having the script start the second bot 7 minutes after the first one, so that they work in turns.
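One way to check which rate-limit bucket the requests are actually being counted against (user vs. application) is to ask the API for its current limits. A minimal sketch, assuming the `api` object from the bot code above; the endpoint key and field names are the standard Twitter API v1.1 ones:

# Sketch: inspect the remaining quota for the user_timeline endpoint.
# Assumes `api` is the authenticated tweepy.API instance from the code above.
limits = api.rate_limit_status()
timeline = limits['resources']['statuses']['/statuses/user_timeline']
print("remaining:", timeline['remaining'],
      "of", timeline['limit'],
      "window resets at (unix time):", timeline['reset'])

# Tweepy can also be told to sleep automatically when a limit is hit:
# api = tweepy.API(auth, wait_on_rate_limit=True)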
I have read this question, which is similar and gets me most of the way.
The code for the answer isn't posted, but I believe I have followed the instructions and managed to get it working -- except after the recording has been saved and reopened.
It works perfectly fine immediately after recording; however, I want to save the data and read it back for later use -- literally every time I run the program -- without having to re-record it each time.
import keyboard
import threading
from keyboard import KeyboardEvent
import time
import json


def record(file='record.txt'):
    f = open(file, 'w+')
    keyboard_events = []
    keyboard.start_recording()
    starttime = time.time()
    keyboard.wait('esc')
    keyboard_events = keyboard.stop_recording()
    print(starttime, file=f)
    for kevent in range(0, len(keyboard_events)):
        print(keyboard_events[kevent].to_json(), file=f)
    f.close()


def play(file="record.txt", speed=1):
    f = open(file, 'r')
    lines = f.readlines()
    f.close()
    keyboard_events = []
    for index in range(1, len(lines)):
        keyboard_events.append(keyboard.KeyboardEvent(**json.loads(lines[index])))
    starttime = float(lines[0])
    keyboard_time_interval = keyboard_events[0].time - starttime
    keyboard_time_interval /= speed
    k_thread = threading.Thread(target=lambda: time.sleep(keyboard_time_interval) == keyboard.play(keyboard_events, speed_factor=speed))
    k_thread.start()
    k_thread.join()
I am not especially new to coding or to the Python language, but this problem perplexes me. I've tested all the variables, and none of them are preserved outside of the record function.
(I don't fully understand lambda, Threading or **json.loads, but I don't think that's a problem.)
What's going on here?
For extra bonus points, if this is possible to do asynchronously, that'd be amazing. One problem at a time, though.
Just in case anyone else ever has the same problem as me: add this at the start of your code. No idea why it works, but it does.
keyboard.start_recording()
temp = keyboard.stop_recording()
You can forget about the temp variable immediately.
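In context, the workaround might sit right at the top of the script from the question, before record() or play() is ever called:

import keyboard

# Prime the keyboard hook once at startup; the events captured here are simply discarded.
keyboard.start_recording()
temp = keyboard.stop_recording()

# ... the record() and play() functions from the question follow unchanged ...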
I have worked through all the tutorials and searched for "load csv tensorflow" but just can't get the logic of it all. I'm not a total beginner, but I don't have much time to complete this, and I've been suddenly thrown into TensorFlow, which is unexpectedly difficult.
Let me lay it out:
A very simple CSV file of 184 columns that are all float numbers. A row is simply today's price, three buy signals, and the previous 180 days' prices:
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
This article: https://www.tensorflow.org/guide/datasets
It covers how to load data pretty well. It even has a section on converting to NumPy arrays, which is what I need to train and test the net. However, as the author says in the article leading to this page, it is pretty complex. Everything seems geared toward data manipulation, whereas we have already normalized our data (nothing has really changed in AI since 1983 in terms of inputs, outputs, and layers).
Here is a way to load it, but it doesn't get the data into NumPy, and there's no example that avoids manipulating the data.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with open('/BTC1.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            ?????????
            line_count += 1
I need to know how to get the CSV file into the
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
so that I can follow the tutorials to train and test the net.
Your question isn't completely clear to me. If I understand correctly (tell me if I'm wrong), you're asking how to feed data into your model? There are several ways to do so.
Use placeholders with feed_dict during the session. This is the basic and easiest one, but it often suffers from training performance issues. For further explanation, check this post.
Use queues. Hard to implement and badly documented; I don't suggest it, because it has been superseded by the third method.
tf.data API.
...
So, to answer your question with the first method:
# get your array outside the session
with open('/BTC1.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    # csv.reader yields strings, so cast to float
    dataset = np.asarray([data for data in csv_reader], dtype=np.float32)
close_col = dataset[:, 0]
signal_cols = dataset[:, 1:4]   # the three signal columns
previous_cols = dataset[:, 4:]  # the 180 previous prices

# let's say you load 100 rows each time for training
batch_size = 100

# define placeholders like you did
...

with tf.Session() as sess:
    ...
    for i in range(number_iter):
        start = i * batch_size
        end = (i + 1) * batch_size
        sess.run(train_operation, feed_dict={close: close_col[start:end],
                                             signals: signal_cols[start:end],
                                             previous: previous_cols[start:end]})
With the third method:
# retrieve your columns like before
...
# let's say you load 100 rows each time for training
batch_size = 100

# construct your input pipeline
c_col, s_col, p_col = wrapper(filename)  # wrapper(): whatever loads the three column groups, as above
batch = tf.data.Dataset.from_tensor_slices((c_col, s_col, p_col))
batch = batch.shuffle(c_col.shape[0]).batch(batch_size)  # mix data --> assemble batches --> ready to feed the model
iterator = batch.make_initializable_iterator()
iter_init_operation = iterator.initializer
c_it, s_it, p_it = iterator.get_next()  # get-next-batch operation, automatically evaluated at each iteration within the session

# replace the close, signals, previous placeholders in your model with c_it, s_it, p_it when you define the model
...

with tf.Session() as sess:
    # you need to initialize the iterator
    sess.run([tf.global_variables_initializer(), iter_init_operation])
    ...
    for i in range(number_iter):
        sess.run(train_operation)
Good luck!
It annoys me that the following query takes 1 second to process when fired by an AJAX request, whereas when called during a page refresh (synchronously) it takes merely 2 ms. I have spent hours tracking down what goes wrong, but I am stuck. I have tried Model->read, Model->find, and Model->query(), yet it takes the same amount of time. I don't think 1 second is natural for a simple query like this. Maybe the CakePHP models are wasting too many resources and too much time, but my instinct says it's related to the query cache.
protected function _user_info($id = NULL){
    //benchmarking
    $time = -microtime(true);
    if(!$id){
        if($this->Auth->loggedIn())
            $id = $this->Auth->user('id');
        else
            return NULL;
    }
    $this->loadModel('User');
    /*$findOptions = array('conditions'=>array('User.id'=>$id),
                           'fields'=>'User.id, User.name, User.email, User.role, dp',
                           'limit'=>1,
                           'recursive'=>-1);
    $r = $this->User->find('first', $findOptions);
    */
    $r = $this->User->query("SELECT * FROM users WHERE id = '".$id."' LIMIT 1");
    $time += microtime(true);
    echo '<h1>'.$time.'</h1>'; // out: time taken for the query
    return $r['User'];
}
Any kind of help would be awesome!
First, try normal Cake search style:
// You should have containable
$this->User->contain();
$r = $this->User->find('first',array('conditions'=>array('id'=>$id)));
Test it.
Cheers.
If you're on debug level 2, you're not measuring just execution time but debugging overhead as well.
With debug enabled, the cache won't be used for long, which means the DB will be asked to DESCRIBE the table, an SQL log will be created, and expensive object reflection might be requested multiple times, especially if you hit warnings, exceptions, or non-fatal errors, and all of this takes considerably longer.
I have a function in MATLAB that looks something like this:
function [ out ] = myFunc(arg1, arg2)
    times = [];
    for i = 1:arg1
        tic
        % do some long calculations
        times = [times; toc];
    end
    % Return
    out = times;
end
I want to abort the running function now but keep the values of times that have already been collected. How do I do that? When I press Ctrl+C, I simply lose them, because times is only a local function variable that is deleted when the function leaves its scope...
Thanks!
The simplest solution would be to turn it from a function into a script, where times would no longer be a local variable.
The more elegant solution would be to save the times variable to a .mat file within the loop. Depending on the time per iteration, you could do this on every loop, or once every ten loops, etc.
Couldn't you use persistent variables to solve your problem, e.g.
function [ out ] = myFunc(arg1, arg2)
    persistent times
    if nargin == 0
        out = times;
        return;
    end;
    times = [];
    for i = 1:arg1
        tic
        % do some long calculations
        times = [times; toc];
    end
    % Return
    out = times;
end
I'm not sure whether persistent variables are cleared upon Ctrl-C, but I don't think it should be the case. What this should do: if you supply arguments, it will run as before. When you omit all arguments however, the last value of times should be returned.
onCleanup functions still fire in the presence of Ctrl-C; however, I don't think that's really going to help, because it will be hard for you to connect the value you want to the onCleanup function handle (there are some tricky variable-lifetime issues here). You may have more luck using a MATLAB handle object to track your value. For example:
x = containers.Map(); x('Value') = [];
myFcn(x); % updates x('Value')
% CTRL-C
x('Value') % contains latest value
Another possible solution is to use the assignin function to send the data to your workspace on each iteration. e.g.
function [ out ] = myFunc(arg1, arg2)
    times = [];
    for i = 1:arg1
        tic
        % do some long calculations
        times = [times; toc];
        % copy the variable to the base workspace
        assignin('base', 'thelasttimes', times)
    end
    % Return
    out = times;
end