Context:
I want to build an API using FastAPI and SQLModel.
I need to use database reflection to create the SQLModel table models based on an existing database.
I couldn't find a way to do database reflection directly in SQLModel.
So I do the database reflection in SQLAlchemy, and now I want to turn the SQLAlchemy table models into SQLModel ones so I can easily use them with FastAPI.
Problem:
I can't figure out how to create an SQLModel table model based on the SQLAlchemy table model that I created with database reflection, as shown in the code below.
The SQLModel docs suggest easy integration with SQLAlchemy, so I figured it should be easy...
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.ext.declarative import declarative_base
from fastapi import FastAPI
from sqlmodel import SQLModel

app = FastAPI()

# do database reflection
engine = create_engine('sqlite:///database.db')
Base = declarative_base()
Base.metadata.reflect(engine)
print("alchemy meta tables:\n ", Base.metadata.tables)

# creates SQLAlchemy model based on database table schema of table 'hero'
class HeroDBReflection(Base):
    __table__ = Base.metadata.tables['hero']

# TODO: How to create this Hero SQLModel data model based on the SQLAlchemy model 'HeroDBReflection'?
# class Hero(SQLModel, table=True):
#     metadata = Base.metadata.tables['hero'] ?

# @app.post("/heroes/", response_model=Hero)
# def create_hero(hero: Hero):
#     session.add(hero)
#     session.commit()
#     session.refresh(hero)
#     return hero
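For reference, the hand-written model I'd like to end up with would look roughly like the hero example from the SQLModel docs (the field names and types below are assumed); the whole point is to derive this from the reflected table instead of typing it out:

from typing import Optional
from sqlmodel import Field, SQLModel

# Hand-written target model (fields assumed, mirroring the SQLModel docs' hero example)
class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    secret_name: str
    age: Optional[int] = None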
This is my first Stack Overflow post, so I hope my question is clear :)
Thanks a lot!
Related
dataset = ds.dataset("abfs://test", format="parquet", partitioning="hive", filesystem=fs)
I can read datasets with the pyarrow dataset feature, but how can I write to a dataset with a different schema?
I seem to be able to import DirectoryPartitioning, for example, but I cannot figure out a way to save the data with a partitioning scheme like this:
import pyarrow as pa
from pyarrow.dataset import DirectoryPartitioning

partitioning = DirectoryPartitioning(pa.schema([("year", pa.int16()), ("month", pa.int8()), ("day", pa.int8())]))
print(partitioning.parse("/2009/11/3"))
Will we continue to use write_to_dataset to write Parquet files, or will there be a new method specific to the datasets class?
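For context, this is a sketch of the kind of write I'm after, if something like pyarrow.dataset.write_dataset with a partitioning argument is the intended API (the table and output path below are made up):

import pyarrow as pa
import pyarrow.dataset as ds
from pyarrow.dataset import DirectoryPartitioning

# Made-up table that carries the partition columns as ordinary columns
table = pa.table({
    "year": pa.array([2009, 2009], pa.int16()),
    "month": pa.array([11, 11], pa.int8()),
    "day": pa.array([3, 4], pa.int8()),
    "value": [1.0, 2.0],
})

partitioning = DirectoryPartitioning(
    pa.schema([("year", pa.int16()), ("month", pa.int8()), ("day", pa.int8())]))

# If available, write_dataset lays files out as <base_dir>/2009/11/3/...
ds.write_dataset(table, "partitioned_output", format="parquet",
                 partitioning=partitioning)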
I don't want to build a GeoMesa DataStore; I just want to use the GeoMesa Spark Core/SQL module to do some spatial analysis on Spark. My data sources are some GeoJSON files on HDFS. However, I have to create a SpatialRDD via a SpatialRDDProvider.
There is a Converter RDD Provider example in the GeoMesa documentation:
import com.typesafe.config.ConfigFactory
import org.apache.hadoop.conf.Configuration
import org.geotools.data.Query
import org.locationtech.geomesa.spark.GeoMesaSpark

val exampleConf = ConfigFactory.load("example.conf").root().render()
val params = Map(
  "geomesa.converter"        -> exampleConf,
  "geomesa.converter.inputs" -> "example.csv",
  "geomesa.sft"              -> "phrase:String,dtg:Date,geom:Point:srid=4326",
  "geomesa.sft.name"         -> "example")
val query = new Query("example")
val rdd = GeoMesaSpark(params).rdd(new Configuration(), sc, params, query)
I can choose GeoMesa's JSON converter to create the SpatialRDD. However, it seems necessary to spell out all field names and types in the geomesa.sft parameter and in a converter config file. If I have many GeoJSON files, doing this one by one manually is obviously very inconvenient.
Is there any way for the GeoMesa converter to infer the field names and types from the file?
Yes, GeoMesa can infer the type and converter. For Scala/Java, see this unit test. Alternatively, the GeoMesa CLI tools can be used ahead of time to persist the type and converter to reusable files, using the convert command (type inference is described in the linked ingest command).
I've used JSONField in my serializer and, as the user in the thread "store json as dict" points out, DRF with MySQL stores a JSONField as a dict.
However, I would rather store it as JSON, {"tags": {"data": "test"}}, instead of the default behavior of storing it as a dict, {'tags': {'data': 'test'}}. Daniel suggests overriding the JSONField like this:
class JSONField(serializers.Field):
    def to_representation(self, obj):
        return json.loads(obj)
    ......
However, the data is still stored as a dict.
In my serializers.py I've overridden the JSONField class and then used it like this:

class schemaserializer(serializers.ModelSerializer):
    js_data = serializers.JSONField()

However, it still saves it as a dict.
Expected behavior:
Save it as JSON on POST.
Retrieve it as a dict on GET (so that the renderer can parse it as JSON).
I'm currently doing this manually with json.dumps and json.loads, but I'm looking for a better way.
The reason for doing this is that although I have an API, there are cases where users read my DB directly, and they need the field to be JSON.
Django (2.0.1)
Python 3.5
djangorestframework (3.7.7)
Serializers allow complex data such as querysets and model instances to be converted to native Python datatypes that can then be easily rendered into JSON, XML or other content types.
See more at serializers docs
What you need is:

class SchemaSerializer(serializers.ModelSerializer):
    class Meta:
        model = YOUR_MODEL_NAME
        fields = A_LIST_OF_FIELD

and then in your view:

class SchemaView(mixins.ListModelMixin, generics.GenericAPIView):
    queryset = YOUR_MODEL_NAME.objects.all()
    serializer_class = SchemaSerializer
Do you mean you want to use the actual JSON type in your database backend? If so, you would want to use the appropriate JSONField type on your model instead of in the serializer specifically.
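Roughly like this (a sketch assuming a Django version that ships models.JSONField, i.e. 3.1+; on Django 2.0 with MySQL a third-party field such as django-mysql's JSONField plays the same role, and the model/field names here are just placeholders):

from django.db import models
from rest_framework import serializers

class Schema(models.Model):
    # The JSON type lives on the model; the serializer can then expose it
    # without any custom to_representation/to_internal_value logic.
    js_data = models.JSONField()

class SchemaSerializer(serializers.ModelSerializer):
    class Meta:
        model = Schema
        fields = ["js_data"]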
Is there any method to import data from MySQL to Elasticsearch batch by batch? If yes, how do I do it?
The bulk import seems to be a problem: when I import 191000 items, only a few are actually imported.
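For context, this is the kind of batch-by-batch import I'm trying to get working (a sketch assuming elasticsearch-py's bulk helper and PyMySQL; the connection details, table, and index names are placeholders):

import pymysql
from elasticsearch import Elasticsearch, helpers

BATCH_SIZE = 1000
es = Elasticsearch("http://localhost:9200")
conn = pymysql.connect(host="localhost", user="user", password="pass", db="mydb")

with conn.cursor(pymysql.cursors.DictCursor) as cursor:
    cursor.execute("SELECT id, name, price FROM items")
    while True:
        rows = cursor.fetchmany(BATCH_SIZE)
        if not rows:
            break
        actions = (
            {"_index": "items", "_id": row["id"], "_source": row}
            for row in rows
        )
        # helpers.bulk sends one bulk request per batch and raises on errors,
        # which makes partial imports easier to spot.
        helpers.bulk(es, actions)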
I'm trying to export a pandas DataFrame to JSON with no luck. I've tried:
all_data.to_json("spdata.json") and all_data.to_json()
I get the same attribute error on both: 'DataFrame' object has no attribute 'to_json'. Just to make sure something isn't wrong with the DataFrame, I tested writing it to_csv and that worked.
Is there something I'm missing in my syntax or a package I need to import? I am running Python version 2.7.5, which is part of an Enthought Canopy Express package. The imports at the beginning of my code are:
from pandas import Series, DataFrame
import pandas as pd
import numpy as np
from sys import argv
from datetime import datetime, timedelta
from dateutil.parser import parse
The to_json method was introduced in pandas 0.12, so you'll need to upgrade your pandas to be able to use it.
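After upgrading, both calls from the question work (the DataFrame below is just a stand-in for all_data):

import pandas as pd

# Stand-in for the real all_data DataFrame
all_data = pd.DataFrame({"symbol": ["SPY", "QQQ"], "close": [448.9, 375.1]})

all_data.to_json("spdata.json")   # write JSON to a file
json_string = all_data.to_json()  # or get the JSON string back directly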