I have a pretrained model in JSON format, but I want to use it in a Python app. Is there a way to convert this model to the Keras h5 model format?
As per https://www.tensorflow.org/api_docs/python/tf/keras/models/, it should look something like:
import tensorflow as tf

json_file_path = "/path/to/json"
with open(json_file_path, "r") as json_file:
    json_savedModel = json_file.read()

# Rebuild the model architecture from the JSON string
model_json = tf.keras.models.model_from_json(json_savedModel)
model_json.save("model_name.h5", save_format="h5")
Note that model_from_json restores only the architecture; if you also have a separate weights file, load it with model_json.load_weights(...) before saving.
Edit: And then to load the h5 model:
h5_file = "/path/to/model_name.h5"
model_h5 = tf.keras.models.load_model(h5_file)
I have a json file in the form
{"total_rows":1000,"rows":[{data},{data},{data}]}
and I just want
[{data},{data},{data}]
I know pandas can get the desired output into a dataframe, like:
import json
import pandas as pd

file_reading = json.loads(open(url).read())
df = pd.DataFrame.from_dict(file_reading['rows'])
print(df)
But I am hoping for a way to output this as a JSON array, and it's a big dataset, so I don't want to loop.
You opened a file without closing it. There's nothing fancy needed; the JSON just translates into a dictionary in Python:
with open(url) as fp:
    file_reading = json.load(fp)
df = pd.DataFrame(file_reading["rows"])
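If you just want the JSON array itself rather than a dataframe, file_reading["rows"] is already a plain Python list of dicts, so you can dump it straight back out without any explicit loop; a minimal sketch (rows.json is a hypothetical output path):

import json

with open(url) as fp:
    file_reading = json.load(fp)

rows = file_reading["rows"]          # already the list [{data}, {data}, ...]
with open("rows.json", "w") as out:  # hypothetical output path
    json.dump(rows, out)

json.load parses the whole document in one pass, so no row-by-row iteration is needed on your side.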
I want to scrape data at the county level from https://apidocs.covidactnow.org
However, I could only get a dataframe with one row per county, and the data for each date is stored within a dictionary in each row/county. I would like to access this data and store it in long format (i.e., one row per county-date).
import requests
import pandas as pd
import os
if __name__ == '__main__':
    os.chdir('/home/username/Desktop/')
    url = 'https://api.covidactnow.org/v2/counties.timeseries.json?apiKey=ENTER_YOUR_KEY'
    response = requests.get(url).json()
    data = pd.DataFrame(response)
This seems like a trivial question, but I've tried for hours. What would be the best way to achieve this?
Do you mean something like this?
import requests
url = 'https://api.covidactnow.org/v2/states.timeseries.csv?apiKey=YOURAPIKEY'
response = requests.get(url)
csv_response = response.text
# Then you can transform the string to CSV
Check this for string to CSV --> python parsing string to csv format
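For example, a minimal sketch that parses the CSV text straight into a dataframe using io.StringIO from the standard library, with no intermediate file:

import io
import requests
import pandas as pd

url = 'https://api.covidactnow.org/v2/states.timeseries.csv?apiKey=YOURAPIKEY'
response = requests.get(url)

# Wrap the CSV text in a file-like object so pandas can read it directly
df = pd.read_csv(io.StringIO(response.text))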
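And if you would rather stay with the JSON endpoint from the question and get the long format (one row per county-date) directly, pd.json_normalize can explode each county's per-date list into rows. The field names actualsTimeseries, fips, county, and state below are assumptions about the API schema, so check the API docs for the exact names:

import requests
import pandas as pd

url = 'https://api.covidactnow.org/v2/counties.timeseries.json?apiKey=ENTER_YOUR_KEY'
counties = requests.get(url).json()  # a list with one dict per county

# Assumed field names -- verify them against the actual API response
long_df = pd.json_normalize(
    counties,
    record_path='actualsTimeseries',   # the per-date list inside each county record
    meta=['fips', 'county', 'state'],  # county-level fields repeated on every row
)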
After training an xgboost model, I would like to save it, along with some other custom fields, as a JSON object as below. The purpose is that I can later load the JSON object, use the model object to make predictions, and inspect the other custom fields.
model = xgb.train(params=tree_params, dtrain=data)
my_model_dict = {
"model": ...<json serializable model object>..., # need help here
"features": model.feature_names,
"tree_params": tree_params,
...
}
with open(file_path, "w") as f:
    json.dump(my_model_dict, f)

with open(file_path) as f:
    my_model_dict = json.load(f)
model = my_model_dict["model"]
predictions = model.predict(new_data)
Is it possible to convert an xgboost model object into an object that is json serializable and that can then be loaded to make standard xgboost predictions?
I appreciate I can save the raw model separately as JSON using
model.save_model("my_model.json")

model = xgb.Booster()
model.load_model("my_model.json")
model.predict(xgb.DMatrix(new_data))
but what I would really like to do is create a dictionary containing the model, along with other custom fields, that can be saved as JSON and then loaded to make predictions.
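One way this could work, sketched here rather than guaranteed: save_model("...json") writes the booster as plain JSON text, so you can read that text back, embed it as a string in your dictionary, and write the string back out to a scratch file when you want to reload. The paths model_tmp.json and bundle.json below are hypothetical:

import json
import xgboost as xgb

# Dump the booster to a scratch JSON file and read its text back
model.save_model("model_tmp.json")        # hypothetical scratch path
with open("model_tmp.json") as f:
    model_json_str = f.read()

my_model_dict = {
    "model": model_json_str,              # the booster as a JSON string
    "features": model.feature_names,
    "tree_params": tree_params,
}
with open("bundle.json", "w") as f:       # hypothetical bundle path
    json.dump(my_model_dict, f)

# Later: restore the booster by writing the embedded string back to disk
with open("bundle.json") as f:
    loaded = json.load(f)
with open("model_tmp.json", "w") as f:
    f.write(loaded["model"])
booster = xgb.Booster()
booster.load_model("model_tmp.json")
predictions = booster.predict(xgb.DMatrix(new_data))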
Suppose some JSON file is at www.github.com/xyz/Hello.json; I want to read the content of this JSON into a JSON object in Groovy.
You then need:
import groovy.json.JsonSlurper
def slurped = new JsonSlurper().parse('https://www.github.com/xyz/Hello.json'.toURL())
See the JsonSlurper documentation for more info.
I want to save the encoded part, just before the decoder stage, of an autoencoder model in Keras with the TensorFlow backend.
For instance;
from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 210

input_img = Input(shape=(5184,))
encoded = Dense(2592, activation='relu')(input_img)
encoded1 = Dense(encoding_dim, activation='relu')(encoded)
decoded = Dense(encoding_dim, activation='relu')(encoded1)
decoded = Dense(5184, activation='sigmoid')(decoded)
I want to save encoded1 as a CSV file after the autoencoder training. Suppose the output of the Dense layer is (nb_samples, output_dim).
Thank you
Try:
import numpy

autoencoder = Model(input_img, decoded)
# A second model that shares the layers up to encoded1
encoder = Model(input_img, encoded1)

autoencoder.compile(loss=my_loss, optimizer=my_optimizer)
autoencoder.fit(my_data, my_data)  # ... rest of fit params

# encoder shares weights with the trained autoencoder, so no extra training is needed
numpy.savetxt("encoded1.csv", encoder.predict(my_data), delimiter=",")
Moreover, I don't know what kind of data you have, but I'd advise using a linear activation in the last layer and the mse loss function.