How to preprocess the new dataset during model deployment using Flask - JSON

import pandas as pd
from sklearn.preprocessing import LabelEncoder, Normalizer

data = dataset.iloc[:, :-1].values
label = dataset.iloc[:, -1].values

labelencoder = LabelEncoder()
#----Encoding categorical columns----#
for i in range(0, 5):
    data[:, i] = labelencoder.fit_transform(data[:, i])
data1 = pd.DataFrame(data[:, :5])
for i in range(7, 12):
    data[:, i] = labelencoder.fit_transform(data[:, i])
data2 = pd.DataFrame(data[:, 7:12])

#----Normalizing non-categorical columns----#
data3 = dataset.iloc[:, [5, 6, 12]]
normalized_data = Normalizer().fit_transform(data3)
data3 = pd.DataFrame(normalized_data)

data_full = pd.concat([data1, data2, data3], axis=1)
label = labelencoder.fit_transform(label)
label = pd.DataFrame(label)
Above are my preprocessing steps. I want to apply the same steps to new input data after the model is deployed through a web app.
How do I write a function for this? I am using Flask for developing the APIs.
What should go inside the predict() function in app.py?
@app.route('/predict', methods=['POST'])
def predict():

You will have to pickle all the transformers that you use while pre-processing your data. Then you will have to load the same transformers and use them during prediction.
Creating a new transformer and fitting it on different values will give you weird predictions.
I created a demo flask project for a meetup. It has all the code that you need.
Deployment: https://github.com/Ankur-singh/flask_demo/blob/master/final_ml_flask.py
Training: https://github.com/Ankur-singh/flask_demo/blob/master/iris.py
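For illustration, a minimal sketch of that idea (the file names preprocessing.pkl and model.pkl, and the assumption that the client POSTs the raw feature values as a JSON list in training-column order, are mine, not from the original post): fit and pickle the transformers once during training, then only call transform() inside the Flask endpoint.
# --- training side (hypothetical train.py; 'dataset' is the training DataFrame above) ---
import pickle
import numpy as np
from sklearn.preprocessing import LabelEncoder, Normalizer

data = dataset.iloc[:, :-1].values
encoders = {i: LabelEncoder().fit(data[:, i])             # one encoder per categorical column
            for i in list(range(0, 5)) + list(range(7, 12))}
normalizer = Normalizer().fit(dataset.iloc[:, [5, 6, 12]])

with open('preprocessing.pkl', 'wb') as f:                # assumed file name
    pickle.dump({'encoders': encoders, 'normalizer': normalizer}, f)

# --- serving side (hypothetical app.py) ---
from flask import Flask, request, jsonify

app = Flask(__name__)
prep = pickle.load(open('preprocessing.pkl', 'rb'))
model = pickle.load(open('model.pkl', 'rb'))              # assumed pickled estimator

@app.route('/predict', methods=['POST'])
def predict():
    # the client sends the raw feature values as a JSON list, in training-column order
    raw = np.array(request.get_json(), dtype=object).reshape(1, -1)
    for i, enc in prep['encoders'].items():
        raw[:, i] = enc.transform(raw[:, i])               # transform only -- never refit
    norm = prep['normalizer'].transform(raw[:, [5, 6, 12]].astype(float))
    full = np.hstack([raw[:, :5], raw[:, 7:12], norm])     # same column layout as data_full
    return jsonify({'prediction': model.predict(full.astype(float)).tolist()})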

Related

How to load images from URLs using PyTorch

I want to load the images using PyTorch.
I have a dataset of image URLs with corresponding labels (offer_id is the label).
Is there any efficient way of doing this in PyTorch?
This should work if the image URL is public, using Pillow, requests, and torchvision together:
from PIL import Image
import requests
import torchvision.transforms as transforms

url = "https://example.jpg"
image = Image.open(requests.get(url, stream=True).raw)
transform = transforms.Compose([transforms.PILToTensor()])
torch_image = transform(image)
You can use the requests package:
import requests
from PIL import Image
import io

response = requests.get(df1.URL[0]).content
im = Image.open(io.BytesIO(response))
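Putting those pieces together, here is a minimal sketch of a map-style Dataset that downloads each image on the fly; the dataframe column names (URL, offer_id) and the default transform are assumptions based on the question:
import io
import requests
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms as transforms

class UrlImageDataset(Dataset):
    """Hypothetical dataset: downloads an image from its URL on every access."""
    def __init__(self, df, transform=None):
        self.df = df.reset_index(drop=True)             # expects 'URL' and 'offer_id' columns
        self.transform = transform or transforms.PILToTensor()

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        content = requests.get(row.URL).content
        img = Image.open(io.BytesIO(content)).convert('RGB')
        return self.transform(img), row.offer_id

# usage (df1 is the dataframe of URLs and labels from the question):
# loader = torch.utils.data.DataLoader(UrlImageDataset(df1), batch_size=32, num_workers=4)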
You may convert your image URLs to files first by downloading them into folders, one folder per label. You will certainly find a way to do so. Then you can check what you have like this:
%%time
import glob
f=glob.glob('/content/imgs/**/*.png')
print(len(f), f)
You need to create an image loader that reads the image from disk. Here, the pil_loader:
import torchvision
from PIL import Image

def pil_loader(path):
    with open(path, 'rb') as f:
        img = Image.open(f)
        return img.convert('RGB')

ds = torchvision.datasets.DatasetFolder('/content/imgs',
                                        loader=pil_loader,
                                        extensions=('.png',),
                                        transform=t)  # t is whatever torchvision transform you use
print(ds)
You may check how I did that for CIFAR-10.
Check the section "From PNGs to dataset".

Using flask_excel within Blueprints

I have an application that is using Blueprints and I need to be able to generate csv files for download. I found the flask_excel package which works great, but none of the examples I've found use Blueprints. Here is an example application that works:
# app.py
from flask import Flask
import flask_excel as excel

app = Flask(__name__)
excel.init_excel(app)

@app.route("/example", methods=['GET'])
def example():
    return excel.make_response_from_array([[1, 2], [3, 4]], "csv", file_name="example_data")

if __name__ == "__main__":
    app.run()
However, I need something structured like this:
# __init__.py
from flask import Flask
import flask_excel as excel
from download_csv import download_csv_bp

def create_app():
    app = Flask(__name__)
    app.register_blueprint(download_csv_bp)
    excel.init_excel(app)
    return app
app = create_app()
and
# download_csv.py
from flask import Flask, Blueprint

download_csv_bp = Blueprint('download_csv', __name__)

@download_csv_bp.route("/example", methods=['GET'])
def example():
    return excel.make_response_from_array([[1, 2], [3, 4]], "csv", file_name="example_data")
How do I import the flask_excel functions that are initialized in __init__.py? If I put import flask_excel as excel in my download_csv.py file, the response generated just prints the resulting text to the screen.
The solution was simpler than I thought. If I add the line from app import excel to download_csv.py then it works. My new issue is with ajax calling this route, but as this is unrelated to my original question, I won't ask it here.
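For reference, a minimal sketch of what the working download_csv.py might look like after that change (assuming the package containing __init__.py is named app):
# download_csv.py -- sketch, assuming the Flask package is named "app"
from flask import Blueprint
from app import excel   # re-use the flask_excel module imported in __init__.py

download_csv_bp = Blueprint('download_csv', __name__)

@download_csv_bp.route("/example", methods=['GET'])
def example():
    return excel.make_response_from_array([[1, 2], [3, 4]], "csv", file_name="example_data")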

Import CSV file in DRF

I'm trying to create a view to import a csv using drf and django-import-export.
My example (I'm doing baby steps and debugging to learn):
class ImportMyExampleView(APIView):
    parser_classes = (FileUploadParser, )

    def post(self, request, filename, format=None):
        person_resource = PersonResource()
        dataset = Dataset()
        new_persons = request.data['file']
        imported_data = dataset.load(new_persons.read())
        return Response("Ok - Babysteps")
But I get this error (using postman):
Tablib has no format 'None' or it is not registered.
If I change it to imported_data = Dataset().load(new_persons.read().decode(), format='csv', headers=False), I get this new error:
InvalidDimensions at /v1/myupload/test_import.csv
No exception message supplied
Does anyone have any tips or can indicate a reference? I'm following this site, but I'm having to "translate" to drf.
Starting with baby steps is a great idea. I would suggest getting a standalone script working first, so that you can check the file can be read and imported.
If you can set breakpoints and step into the django-import-export source, this will save you a lot of time in understanding what's going on.
A sample test function (based on the example app):
def test_import():
    with open('./books-sample.csv', 'r') as fh:
        dataset = Dataset().load(fh)
        book_resource = BookResource()
        result = book_resource.import_data(dataset, raise_errors=True)
        print(result.totals)
You can adapt this so that you import your own data. Once this works OK then you can integrate it with your post() function.
I recommend getting the example app running because it will demonstrate how imports work.
InvalidDimensions means that the dataset you're trying to load doesn't match the format expected by Dataset. Try removing the headers=False arg or explicitly declare the headers (headers=['h1', 'h2', 'h3'] - swap in the correct names for your headers).
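Putting those hints together, a rough sketch of how the post() method from the question might look once the format is passed explicitly and the header row is kept (PersonResource comes from the question; the dry-run step is an optional extra):
from rest_framework.views import APIView
from rest_framework.parsers import FileUploadParser
from rest_framework.response import Response
from tablib import Dataset

class ImportMyExampleView(APIView):
    parser_classes = (FileUploadParser, )

    def post(self, request, filename, format=None):
        person_resource = PersonResource()
        new_persons = request.data['file']
        # decode the uploaded bytes and tell tablib it is CSV; keep the header row
        dataset = Dataset().load(new_persons.read().decode('utf-8'), format='csv')
        result = person_resource.import_data(dataset, dry_run=True, raise_errors=True)
        if not result.has_errors():
            person_resource.import_data(dataset, dry_run=False)
        return Response("Imported")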

How to save a Django model into a JSON file

I would like to save a Django model into a json file into a specific directory. How can I do that? My code is below in views.py.
def JSON(request):
    with open(r'C:\Users\Savas\Desktop\DOCUMENTS\file.json', "w") as out:
        mast_point = serializers.serialize("json", Poi.objects.all())
        out.write(mast_point)
Good morning!
Did you look at the documentation for this? :)
Additionally, I would suggest that you indent your code like the following:
from django.core import serializers
from django.http import HttpResponse
from django.template import loader
from .models import yourmodel

def example(request):
    objects = yourmodel.objects.all()
    with open(r'...your path...\file.json', "w") as out:
        mast_point = serializers.serialize("json", objects)
        out.write(mast_point)
    template = loader.get_template('some_template.html')
    context = {'object': objects}
    return HttpResponse(template.render(context, request))
I just tried this code snippet and it worked in my sample Django application; of course you have to adapt it a little bit :)
As of version 1.1 and greater, the Django dumpdata management command allows you to dump data from individual tables:
./manage.py dumpdata myapp2.my_model > fixtures/file_name.json
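On newer Django versions, dumpdata can also pretty-print the JSON and write the file directly (the --output flag needs Django 1.8 or later):
./manage.py dumpdata myapp2.my_model --indent 2 --output fixtures/file_name.json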

Create a Neo4j graph database from Python using HTML forms

Hi, I'm very new to Neo4j. I need to know how to create graph nodes and properties from HTML forms using py2neo and Neo4j, and how to add auto IDs to the nodes.
from flask import Flask, render_template, request, url_for, json, jsonify
from py2neo import neo4j, Graph, Node, Relationship, cypher
from neo4jrestclient.client import GraphDatabase

app = Flask(__name__)
gdb = GraphDatabase("http://neo4j:duke@localhost:7474/db/data")
graph = Graph("http://neo4j:duke@localhost:7474/db/data")

@app.route('/')
def index():
    results = graph.cypher.execute("MATCH (n:Person) RETURN n")
    '''print "gyktdjxdhgfcvkjbljkfr", result'''
    return results.json

@app.route('/hello')
def create():
    return "f"

if __name__ == '__main__':
    app.run()
Check out this blog post by Nicole for some insight:
http://neo4j.com/blog/building-python-web-application-using-flask-neo4j/
code is on github:
https://github.com/nicolewhite/neo4j-flask
You don't need auto-incrementing IDs like in a relational database.
Just use the person's login for that and use MERGE.
See http://neo4j.com/developer/cypher
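For illustration, a minimal sketch of a form-handling route that relies on MERGE instead of auto IDs (the form field names, the login-as-key choice, and the credentials are assumptions; py2neo 2.x API as used in the question):
from flask import Flask, request
from py2neo import Graph

app = Flask(__name__)
graph = Graph("http://neo4j:password@localhost:7474/db/data")   # assumed credentials

@app.route('/person', methods=['POST'])
def create_person():
    # MERGE matches an existing Person by its unique login or creates it,
    # so no auto-incrementing id is needed
    graph.cypher.execute(
        "MERGE (p:Person {login: {login}}) SET p.name = {name}",
        login=request.form['login'],
        name=request.form['name'],
    )
    return "created"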