I am trying to use the federated learning framework Flower with TensorFlow. My code seems to run fine, but it doesn't show the federated loss and accuracy. What am I doing wrong?
Server-side code:
import flwr as fl
import sys
import numpy as np

class SaveModelStrategy(fl.server.strategy.FedAvg):
    def aggregate_fit(
        self,
        rnd,
        results,
        failures
    ):
        aggregated_weights = super().aggregate_fit(rnd, results, failures)
        """if aggregated_weights is not None:
            # Save aggregated_weights
            print(f"Saving round {rnd} aggregated_weights...")
            np.savez(f"round-{rnd}-weights.npz", *aggregated_weights)"""
        return aggregated_weights

# Create strategy and run server
strategy = SaveModelStrategy()

# Start Flower server for two rounds of federated learning
fl.server.start_server(
    server_address='localhost:' + str(sys.argv[1]),
    # server_address="[::]:8080",
    config={"num_rounds": 2},
    grpc_max_message_length=1024 * 1024 * 1024,
    strategy=strategy
)
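The client side isn't shown in the question; for reference, here is an illustrative NumPyClient sketch (not from the original post), assuming an older Flower release that matches the dict-style config above, with a throwaway Keras model and random data standing in for the real ones. The client's evaluate method is what produces the loss/accuracy values the server can aggregate:

import flwr as fl
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the real model and data
model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,))])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
x_train, y_train = np.random.rand(100, 784), np.random.randint(0, 10, 100)
x_test, y_test = np.random.rand(20, 784), np.random.randint(0, 10, 20)

class FlowerClient(fl.client.NumPyClient):
    def get_parameters(self):
        return model.get_weights()

    def fit(self, parameters, config):
        model.set_weights(parameters)
        model.fit(x_train, y_train, epochs=1, verbose=0)
        return model.get_weights(), len(x_train), {}

    def evaluate(self, parameters, config):
        # this is where the federated loss/accuracy come from
        model.set_weights(parameters)
        loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
        return loss, len(x_test), {"accuracy": accuracy}

fl.client.start_numpy_client("localhost:8080", client=FlowerClient())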
On the server side: according to the source code of app.py, I realized that we can set force_final_distributed_eval = True and pass it to fl.server.start_server().
Not sure whether this is intended, but it solved my problem.
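For example, a minimal sketch based on the line above (the other arguments are unchanged from the server code in the question):

fl.server.start_server(
    server_address='localhost:' + str(sys.argv[1]),
    config={"num_rounds": 2},
    grpc_max_message_length=1024 * 1024 * 1024,
    strategy=strategy,
    force_final_distributed_eval=True,  # triggers a final distributed evaluation pass
)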
I have created a class in which I load my custom weights for YOLOv5, v7, and v8 to get detections; from this class I get the xyxy values and the class name. Now I want to run Deep SORT on top of it, using only this detection class and no extra files. In my detection class I simply use torch.hub.load() and access the basic functions of YOLOv5, v7, and v8. I am looking for a specific, straightforward way to apply Deep SORT using my detection class. Is that possible? If it is, how can I do it? If not, what is the simplest method to implement Deep SORT?
import torch
import cv2

class YoloDetector:
    def __init__(self, conf_thold, device, weights_path, expected_objs):
        # load the custom weights, set the confidence level and GPU/CPU device
        self._model = torch.hub.load("WongKinYiu/yolov7", "custom", f"{weights_path}", trust_repo=True)
        self._model.conf = conf_thold        # NMS confidence threshold
        self._model.classes = expected_objs  # (optional list) filter by class
        self._model.to(device)               # specifying device type

    def process_image(self, image):
        results = self._model(image)
        predictions = []  # final list to return all detections
        detection = results.pandas().xyxy[0]
        for i in range(len(detection)):
            # getting bbox and class name one by one
            class_name = detection["name"][i]
            xmin = detection["xmin"][i]
            ymin = detection["ymin"][i]
            xmax = detection["xmax"][i]
            ymax = detection["ymax"][i]
            # appending the values to the list as a dictionary
            predictions.append({'class': class_name,
                                'bbox': [int(xmin), int(ymin), int(xmax), int(ymax)]})
        return predictions
That is the code I am using to get detections. Please tell me how I can implement Deep SORT on top of it.
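One option (not from the original post) is the deep-sort-realtime package, which wraps Deep SORT behind a single class. Below is a hedged sketch of how the detector above could feed it; it assumes that package's DeepSort API, uses placeholder file names ("best.pt", "input.mp4"), and uses a placeholder confidence of 1.0 because the detection class above does not currently return a score:

import cv2
from deep_sort_realtime.deepsort_tracker import DeepSort

detector = YoloDetector(conf_thold=0.4, device="cuda", weights_path="best.pt", expected_objs=None)
tracker = DeepSort(max_age=30)  # frames to keep a lost track alive

cap = cv2.VideoCapture("input.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Convert the detector's output into the ([left, top, w, h], confidence, class)
    # tuples that deep-sort-realtime expects; 1.0 is a placeholder confidence.
    detections = []
    for det in detector.process_image(frame):
        x1, y1, x2, y2 = det['bbox']
        detections.append(([x1, y1, x2 - x1, y2 - y1], 1.0, det['class']))
    tracks = tracker.update_tracks(detections, frame=frame)
    for track in tracks:
        if not track.is_confirmed():
            continue
        x1, y1, x2, y2 = map(int, track.to_ltrb())
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, str(track.track_id), (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

If you prefer to keep the detection class untouched, returning the confidence alongside the class and bbox would let you drop the placeholder.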
I have a simple FastAPI endpoint that connects to a MySQL database using SQLAlchemy (based on the tutorial: https://fastapi.tiangolo.com/tutorial/sql-databases/).
I create a session using:
engine = create_engine(
    SQLALCHEMY_DATABASE_URL
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
I create the dependency:
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
In my route I want to execute an arbitrary SQL statement, but I am not sure how to correctly handle the session, connection, cursor, etc. (including closing them), which I learned the hard way is super important for correct performance.
#app.get("/get_data")
def get_data(db: Session = Depends(get_db)):
???
Ultimately the reason for this is that my table contains machine learning features with columns that are not determined beforehand. If there is a way to define a Base model with "all columns", that would work too, but I couldn't find that either.
I solved this using the https://www.encode.io/databases/ package instead. It handles all connections / sessions etc. under the hood. Simplified snippet:
database = databases.Database(DATABASE_URL)

@app.get("/read_db")
async def read_db():
    data = await database.fetch_all("SELECT * FROM USER_TABLE")
    return data
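Worth noting (not part of the original answer): with the databases package the connection has to be opened before queries run, which with FastAPI is typically done in startup/shutdown handlers. A sketch, where DATABASE_URL is a placeholder and app is the FastAPI instance from the question:

import databases

DATABASE_URL = "mysql://user:password@localhost/dbname"  # placeholder URL
database = databases.Database(DATABASE_URL)

@app.on_event("startup")
async def startup():
    await database.connect()

@app.on_event("shutdown")
async def shutdown():
    await database.disconnect()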
# inside a FastAPI route; uname, password, server, port and db hold the connection settings
import pymysql, pandas as pd
from sqlalchemy import create_engine

engine = create_engine('mysql+pymysql://' + uname + ':' + password + '@' + server + ':' + port + '/' + db)
con = engine.connect()
df = pd.read_sql('SELECT schema_name FROM information_schema.schemata', con)
con.close()
return df
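For completeness (not from either answer above): the original Depends(get_db) session can also run arbitrary SQL directly, since Session.execute accepts a text() clause and the dependency already closes the session. A sketch, assuming SQLAlchemy 1.4+ and the USER_TABLE from the earlier snippet:

from fastapi import Depends, FastAPI
from sqlalchemy import text
from sqlalchemy.orm import Session

app = FastAPI()

@app.get("/get_data")
def get_data(db: Session = Depends(get_db)):
    # .mappings() returns dict-like rows, handy when columns are not known beforehand
    rows = db.execute(text("SELECT * FROM USER_TABLE")).mappings().all()
    return [dict(row) for row in rows]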
I am building a sequential model in Keras with a custom activation function, implemented as a new class using Keras' TF backend and some of TF's own tensor operators. I put the custom activation function in ../keras/advanced_activation.py.
I intend to run it with float16 precision. Without the custom function, I can use the following to choose between float32 and float16 easily:
if self.precision == 'float16':
    K.set_floatx('float16')
    K.set_epsilon(1e-4)
else:
    K.set_floatx('float32')
    K.set_epsilon(1e-7)
Yet when I add the custom function to my model, TF seems to persist in float32 even when I choose float16. I understand that TF runs under float32 by default, so my questions are:
There are also several built-in activation functions in the same file; how does Keras make them run under float16, so that I might be able to do the same? There is a TF method tf.dtypes.cast(...); can I use it in my custom function to force TF? There is no such cast in those built-in functions.
Alternatively, how can I force TF to run under float16 directly while using Keras with TF as the backend?
Many thanks.
I got the answer by debugging. The lessons are:
First, tf.dtypes.cast(...) works.
Second, I can pass a second argument into my custom activation function to indicate the data type for cast(...). The associated code is below.
Third, we do not need tf.constant to indicate the data type of those constants.
Fourth, I conclude that adding a custom function in custom_activation.py is the easiest way to define our own layer/activation, as long as it is differentiable everywhere, or at least piecewise differentiable with no discontinuity at the junctures.
# Quadruple Piece-Wise Constant Function
# (imports added for completeness)
import tensorflow as tf
from keras import backend as K
from keras.layers import Layer

class MyFunc(Layer):
    def __init__(self, sharp=100, DataType='float32', **kwargs):
        super(MyFunc, self).__init__(**kwargs)
        self.supports_masking = True
        self.sharp = K.cast_to_floatx(sharp)
        self.DataType = DataType

    def call(self, inputs):
        inputss = tf.dtypes.cast(inputs, dtype=self.DataType)
        orig = inputss
        # some calculations
        return  # my_results

    def get_config(self):
        config = {'sharp': float(self.sharp),
                  'DataType': self.DataType}
        base_config = super(MyFunc, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def compute_output_shape(self, input_shape):
        return input_shape
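For context, a usage sketch (not from the original answer; the layer body above is elided, so MyFunc is assumed to return a tensor of the same shape as its input):

from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

K.set_floatx('float16')
K.set_epsilon(1e-4)

model = Sequential([
    Dense(32, input_shape=(16,)),
    MyFunc(sharp=100, DataType='float16'),  # the cast happens inside the custom layer
    Dense(1),
])
model.compile(optimizer='sgd', loss='mse')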
Thanks @y.selivonchyk for your worthy discussion with me, and @Yolo Swaggins for your contribution.
I need to stress-test a system and http://locust.io seems like the best way to go about this. However, it looks like it is set up to use the same user every time. I need each spawn to log in as a different user. How do I go about setting that up? Alternatively, is there another system that would be good to use?
Locust author here.
By default, each HttpLocust user instance has an HTTP client with its own separate session.
Locust doesn't have any feature for providing a list of user credentials or similar. However, your load testing scripts are just Python code, and luckily it's trivial to implement this yourself.
Here's a short example:
# locustfile.py
from locust import HttpLocust, TaskSet, task

USER_CREDENTIALS = [
    ("user1", "password"),
    ("user2", "password"),
    ("user3", "password"),
]

class UserBehaviour(TaskSet):
    def on_start(self):
        if len(USER_CREDENTIALS) > 0:
            user, passw = USER_CREDENTIALS.pop()
            self.client.post("/login", {"username": user, "password": passw})

    @task
    def some_task(self):
        # user should be logged in here (unless the USER_CREDENTIALS ran out)
        self.client.get("/protected/resource")

class User(HttpLocust):
    task_set = UserBehaviour
    min_wait = 5000
    max_wait = 60000
The above code wouldn't work when running Locust distributed, since the same code runs on each slave node and they don't share any state. Therefore you would have to introduce some external datastore which the slave nodes could use to share state (e.g. PostgreSQL, redis, memcached or something else).
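As an illustration (not from the original answer), a minimal sketch of sharing credentials through Redis, assuming a Redis instance reachable by all slave nodes, a list pre-populated with "user:password" strings under the key locust_users, and the older Locust API used above:

import redis
from locust import HttpLocust, TaskSet, task

r = redis.Redis(host="redis-host", port=6379)  # hypothetical shared Redis instance

class UserBehaviour(TaskSet):
    def on_start(self):
        cred = r.lpop("locust_users")  # each simulated user pops a unique credential
        if cred:
            user, passw = cred.decode().split(":", 1)
            self.client.post("/login", {"username": user, "password": passw})

    @task
    def some_task(self):
        self.client.get("/protected/resource")

class User(HttpLocust):
    task_set = UserBehaviour
    min_wait = 5000
    max_wait = 60000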
Alternatively, you can create a users.py module to hold the user information you need in your test cases; in my example it holds emails and cookies. Then you can pick them randomly in your tasks. See below:
# locustfile.py
import random

from locust import HttpLocust, TaskSet, task
from user_agent import *
from users import users_info

class UserBehaviour(TaskSet):
    def get_user(self):
        user = random.choice(users_info)
        return user

    @task(10)
    def get_siparislerim(self):
        user = self.get_user()
        user_agent = self.get_user_agent()
        r = self.client.get("/orders", headers={"Cookie": user[1], 'User-Agent': user_agent})

class User(HttpLocust):
    task_set = UserBehaviour
    min_wait = 5000
    max_wait = 60000
The user and user-agent can be picked by a function. This way, we can distribute the test across many users and different user-agents.
# users.py
users_info = [
    ['performancetest.1441926507@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926506@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926501@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926499@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926494@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926493@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926492@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926491@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926490@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926489@gmail.com', 'cookies_created_by_each_user'],
    ['performancetest.1441926487@gmail.com', 'cookies_created_by_each_user'],
]
I took a slightly different approach when implementing this for a distributed system. I used a very simple Flask server that I make a GET call to during the on_start portion of the TaskSet.
from flask import Flask, jsonify

app = Flask(__name__)
count = 0  # shared variable

@app.route("/")
def counter():
    global count
    count = count + 1
    tenant = count // 5 + 1
    user = count % 5 + 1
    return jsonify({'count': count,
                    'tenant': "load_tenant_{}".format(str(tenant)),
                    'admin': "admin",
                    'user': "load_user_{}".format(str(user))})

if __name__ == "__main__":
    app.run()
This way I now have an endpoint I can hit at http://localhost:5000/ on whatever host I run it on. I just need to make this endpoint accessible to the slave systems, and I won't have to worry about duplicate users or some kind of round-robin effect caused by having a limited set of user_info.
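To make the idea concrete (this snippet is illustrative, not from the original answer; the /login path and form fields are placeholders to adapt to the system under test), the TaskSet's on_start could fetch its identity from that counter service:

import requests
from locust import HttpLocust, TaskSet, task

COUNTER_URL = "http://localhost:5000/"  # the Flask counter service above

class UserBehaviour(TaskSet):
    def on_start(self):
        identity = requests.get(COUNTER_URL).json()
        # hypothetical login call using the tenant/user returned by the counter
        self.client.post("/login", {"tenant": identity["tenant"], "username": identity["user"]})

    @task
    def some_task(self):
        self.client.get("/protected/resource")

class User(HttpLocust):
    task_set = UserBehaviour
    min_wait = 5000
    max_wait = 60000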
Piggy-backing on @heyman's answer here. The example code will work, but continually starting and stopping tests will eventually empty the USER_CREDENTIALS list and start throwing errors.
I ended up adding the following:
from locust import events  # in addition to the other locust modules needed

def hatch_complete_handler(**kw):
    global USER_CREDENTIALS
    USER_CREDENTIALS = generate_users()  # some function here to regenerate your list

events.hatch_complete += hatch_complete_handler
This refreshes your user list once your swarm finishes hatching.
Also keep in mind that you'll need a list longer than the number of users you wish to spawn.
Trying to port some SQLAlchemy to Django and I've got this tricky little bit:
version = Column(
    BIGINT,
    default=literal_column(
        'UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP)'
    ),
    nullable=False)
What's the best option for porting the literal_column bit to Django? The best idea I've got so far is a function, set as the default, that executes the same raw SQL, but I'm not sure if there's an easier way. My google-fu is failing me here.
Edit: the reason we need to use a timestamp created by MySQL is that we are measuring how out of date something is (so we need to actually know the time), and we want, for correctness, to have only one time-stamping authority (so that we don't introduce error by using Python functions that look at system times, which could differ across servers).
At present I've got:
from django.db import connection

def get_current_timestamp():
    cursor = connection.cursor()
    cursor.execute("SELECT UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP)")
    row = cursor.fetchone()
    return row[0]

version = models.BigIntegerField(default=get_current_timestamp)
which, at this point, sounds like my best/only option.
If you don't care about having a central time authority:
import time

version = models.BigIntegerField(
    default=lambda: int(time.time() * 1000000))
To bend the database to your will:
from django.db.models.expressions import ExpressionNode

class NowInt(ExpressionNode):
    """ Pass this in the same manner you would pass Count or F objects """

    def __init__(self):
        super(NowInt, self).__init__(None, None, False)

    def evaluate(self, evaluator, qn, connection):
        return '(UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP))', []

### Model
version = models.BigIntegerField(default=NowInt())
Because expression nodes are not callables, the expression will be evaluated database-side.
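As an aside (an assumption about newer Django, not part of the original answer): ExpressionNode was removed in later Django releases, and Django 5.0 added db_default, which accepts a database expression, so something along these lines might be the modern equivalent (untested sketch):

from django.db import models
from django.db.models.expressions import RawSQL

class Thing(models.Model):
    # assumes db_default accepts a RawSQL expression on your Django/MySQL versions
    version = models.BigIntegerField(
        db_default=RawSQL(
            'UNIX_TIMESTAMP() * 1000000 + MICROSECOND(CURRENT_TIMESTAMP)', []
        ),
    )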