I have a Dash dashboard.
I use a callback to retrieve my data as a table:
@app.callback(
    Output('kpi_id', 'data'),
    [Input(component_id='my_button', component_property='value')]
)
def create_kpi(value):
    df_filtered = df.loc[df['user_role'].isin(value)]
    df_kpi_users = kpi(df_filtered)
    data = df_kpi_users.to_dict("records")
    return data
I use a list of names to declare my column names:
TABLE_COLUMNS = ['day', 'DAU', 'DAU_7']
Inside the layout declaration I have:
dash_table.DataTable(
    id='kpi_id',
    columns=[{"name": i, "id": i} for i in TABLE_COLUMNS],
    ...
)
My problem is: if I need to refer to the new table as a df in order to apply conditional formatting, such as data_bars, every example I find calls the df like this:
style_data_conditional=(
    data_bars(df, 'DAU')
)
But I don't have a df declared, so how can I call my table?
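A minimal sketch of one way around this, assuming the data_bars helper from the Dash documentation and a 'value' property on my_button (both assumptions): compute the bars inside the same callback and send them to the table's style_data_conditional property as a second Output, with the records going to the table's data property.

@app.callback(
    [Output('kpi_id', 'data'),
     Output('kpi_id', 'style_data_conditional')],
    [Input(component_id='my_button', component_property='value')]
)
def create_kpi(value):
    # df and kpi() as in the question; data_bars as in the Dash docs
    df_filtered = df.loc[df['user_role'].isin(value)]
    df_kpi_users = kpi(df_filtered)
    data = df_kpi_users.to_dict("records")
    styles = data_bars(df_kpi_users, 'DAU')  # build the bars from the freshly computed frame
    return data, styles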
I have a DB collection consisting of nested documents. I am trying to convert the contents under the "status" column into separate columns against each order ID, in order to track the time taken from "order confirmed" to "pick up confirmed". The string looks as follows:
I have tried the same using:

xyz_db = db.logisticsOrders              # DB collection
df = pd.DataFrame(list(xyz_db.find()))   # JSON to DataFrame
Using normalize:

parse1 = pd.json_normalize(df['status'])

It works fine for non-nested arrays, but since status is a nested array the output is as follows:
Using a for loop:

data = df[['orderid', 'status']]
data = list(data['status'])
dfy = pd.DataFrame(columns=['statuscode', 'statusname', 'laststatusupdatedon'])

for i in range(len(data)):
    result = data[i][0]  # first status entry for this order
    dfy.loc[i] = [result['statuscode'], result['statusname'], result['laststatusupdatedon']]
This gives the result as appended rows, which is not the format I am trying to achieve.
The output I am trying to get is:
Please help out!!
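For reference, a hedged sketch of how the nested status array could be flattened and then pivoted into one column per status name using pandas' json_normalize; the field names statuscode, statusname and laststatusupdatedon are taken from the attempt above and are assumptions about the actual documents:

import pandas as pd

records = list(xyz_db.find())  # xyz_db as defined above

# One row per status entry, keeping the parent orderid.
flat = pd.json_normalize(records, record_path='status', meta=['orderid'])

# Pivot so each statusname becomes its own column holding its timestamp.
wide = flat.pivot_table(index='orderid',
                        columns='statusname',
                        values='laststatusupdatedon',
                        aggfunc='first')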
I'll share the JSON-reading approach I used; maybe it helps you. It works with two or more nested lists.
def jsonify(z):
    genr = []
    # z == z filters out NaN values (NaN != NaN)
    if z == z and z is not None:
        z = eval(z)  # the column stores stringified lists/dicts
        if type(z) in (dict, list, tuple):
            for dic in z:
                for key, val in dic.items():
                    if key == "name":
                        genr.append(val)
        else:
            return None
    else:
        return None
    return genr

top_genr['genres_N'] = top_genr['genres'].apply(jsonify)
I have a list of 120 tables and I want to save a sample of the first 1,000 and last 1,000 rows of each table into an individual CSV file per table.
How can this be done in Code Repositories or Code Authoring?
The following code saves one table to CSV. Can anyone help me loop through a list of tables from a project folder and create an individual CSV file for each table?
@transform(
    my_input=Input('/path/to/input/dataset'),
    my_output=Output('/path/to/output/dataset'),
)
def compute_function(my_input, my_output):
    my_output.write_dataframe(
        my_input.dataframe(),
        output_format="csv",
        options={
            "compression": "gzip"
        }
    )
Pseudo code:

list_of_tables = [table1, table2, table3, ..., table120]

for table in list_of_tables:
    table = table.limit(1000)
    table.write_dataframe(table.dataframe(), output_format="csv",
                          options={"compression": "gzip"})
I was able to get it working for one table; how can I loop through a list of tables and generate the output for each one?
The code for one table:
# to get the first and last 1000 rows
from transforms.api import transform_df, Input, Output
from pyspark.sql.functions import monotonically_increasing_id
from pyspark.sql.functions import col

table_name = 'stock'

@transform_df(
    output=Output(f"foundry/sample/{table_name}_sample"),
    my_input=Input(f"foundry/input/{table_name}"),
)
def compute_first_last_1000(my_input):
    # tag each row with an increasing id so "first" and "last" are well defined
    first_stock_df = my_input.withColumn("index", monotonically_increasing_id())
    first_stock_df = first_stock_df.orderBy(col("index").asc()).limit(1000).drop("index")
    last_stock_df = my_input.withColumn("index", monotonically_increasing_id())
    # order descending so we keep the last 1000 rows instead of repeating the first 1000
    last_stock_df = last_stock_df.orderBy(col("index").desc()).limit(1000).drop("index")
    stock_df = first_stock_df.unionByName(last_stock_df)
    return stock_df
# code to save as a csv file
import csv

from transforms.api import transform, Input, Output

table_name = 'stock'

@transform(
    output=Output(f"foundry/sample/{table_name}_sample_csv"),
    my_input=Input(f"foundry/sample/{table_name}_sample"),
)
def my_compute_function(my_input, output):
    df = my_input.dataframe()
    with output.filesystem().open('stock.csv', 'w') as stream:
        csv_writer = csv.writer(stream)
        csv_writer.writerow(df.schema.names)
        csv_writer.writerows(df.collect())
Your best strategy here would be to programmatically generate your transforms; you can also do a multi-output transform if you don't fancy creating 120 separate transforms. Something like this (written live into the answer box, untested code, some syntax may be wrong):
# you can generate this programmatically
my_inputs = [
    '/path/to/input/dataset1',
    '/path/to/input/dataset2',
    '/path/to/input/dataset3',
    # ...
]

for table_path in my_inputs:
    @transform_df(
        Output(table_path + '_out'),
        df=Input(table_path))
    def transform(df):
        # your logic here
        return df
If you need to read the table names rather than hard-coding them, you could use os.listdir or os.walk, as in the sketch below.
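A minimal sketch of that idea, wrapped in a small factory so each generated transform keeps its own table name. The folder name, the path scheme and how the resulting list gets registered with your pipeline are assumptions about your repo setup, not Foundry API:

import os

from transforms.api import transform_df, Input, Output

# Hypothetical: one file per table name sitting next to this module.
TABLE_NAMES = [os.path.splitext(f)[0] for f in os.listdir('table_lists')]

def make_sample_transform(table_name):
    # Build one transform per table; the factory binds table_name per transform.
    @transform_df(
        Output(f"foundry/sample/{table_name}_sample"),
        df=Input(f"foundry/input/{table_name}"),
    )
    def sample_transform(df):
        return df.limit(1000)  # plug in the first/last-1000 logic here
    return sample_transform

GENERATED_TRANSFORMS = [make_sample_transform(name) for name in TABLE_NAMES]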
I think the previous answer left out the part about exporting only the first and last N rows. If the table is converted to a dataframe df, then
dfoutput = df.head(N).append(df.tail(N))
or
dfoutput = df[:N].append(df[-N:])
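Those one-liners assume a pandas DataFrame. On a Spark DataFrame, as used in the transforms above, a hedged sketch of the same idea, reusing the monotonically_increasing_id trick from the question:

from pyspark.sql.functions import monotonically_increasing_id, col

def first_last_n(df, n=1000):
    # Tag rows with an increasing id so "first" and "last" are well defined.
    indexed = df.withColumn("index", monotonically_increasing_id())
    first_n = indexed.orderBy(col("index").asc()).limit(n)
    last_n = indexed.orderBy(col("index").desc()).limit(n)
    return first_n.unionByName(last_n).drop("index")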
I'm trying to create multiple DataFrames from the two lists below,
val paths = ListBuffer("s3://abc_xyz_tableA.json",
"s3://def_xyz_tableA.json",
"s3://abc_xyz_tableB.json",
"s3://def_xyz_tableB.json",
"s3://abc_xyz_tableC.json",....)
val tableNames = ListBuffer("tableA","tableB","tableC","tableD",....)
I want to create a separate DataFrame per table name by grouping together all the S3 paths that end with the same table name, since each table has its own schema.
So, for example, once the tables and their paths are grouped:
"tableADF" will have all the data from the paths "s3://abc_xyz_tableA.json" and "s3://def_xyz_tableA.json", as they have "tableA" in the path
"tableBDF" will have all the data from the paths "s3://abc_xyz_tableB.json" and "s3://def_xyz_tableB.json", as they have "tableB" in the path
and so on; there can be many table names and paths.
I've tried different approaches but haven't been successful yet.
Any leads towards the desired solution would be a great help. Thanks!
Using the input_file_name() function, you can filter on the file names to get a DataFrame for each file or file pattern:
import org.apache.spark.sql.functions._
import spark.implicits._
var df = spark.read.format("json").load("s3://data/*.json")
df = df.withColumn(
"input_file", input_file_name()
)
val tableADF= df.filter($"input_file".endsWith("tableA.json"))
val tableBDF= df.filter($"input_file".endsWith("tableB.json"))
If the list of file postfixes is long, you can use something like the code below.
The code explanation is inline.
import org.apache.spark.sql.functions._
object DFByFileName {
def main(args: Array[String]): Unit = {
val spark = Constant.getSparkSess
import spark.implicits._
//Load your JSON data
var df = spark.read.format("json").load("s3://data/*.json")
//Add a column with file name
df = df.withColumn(
"input_file", (input_file_name())
)
//Extract unique file postfix from the file names in a List
val fileGroupList = df.select("input_file").map(row => {
val fileName = row.getString(0)
val index1 = fileName.lastIndexOf("_")
val index2 = fileName.lastIndexOf(".")
fileName.substring(index1 + 1, index2)
    }).distinct.collect()
//Iterate file group name to map of (fileGroup -> Dataframe of file group)
fileGroupList.map(fileGroupName => {
df.filter($"input_file".endsWith(s"${fileGroupName}.json"))
//perform dataframe operations
})
}
}
Check the code below. The final result type is scala.collection.immutable.Map[String, org.apache.spark.sql.DataFrame] = Map(tableBDF -> [...], tableADF -> [...], tableCDF -> [...]), where ... is your column list.
paths
.map(path => (s"${path.split("_").last.split("\\.json").head}DF",path)) // parsing file names and extracting table name and path into tuple
.groupBy(_._1) // grouping paths based same table name
.map(p => (p._1 -> p._2.map(_._2))).par // combining paths for same table into list and also .par function to execute subsequent steps in Parallel
.map(mp => {
(
mp._1, // table name
mp._2.par // For same DF multiple Files load parallel.
.map(spark.read.json(_)) // loading files s3
.reduce(_ union _) // union if same table has multiple files.
)
}
)
I have a big JSON file containing 18 fields, some of which contain subfields. I read the file into R in the following way:

json_file <- "daily_profiles_Bnzai_20150914_20150915_20150914.json"
data <- fromJSON(sprintf("[%s]", paste(readLines(json_file), collapse=",")))

This gives me a giant list with all the fields contained in the JSON file. I want to turn it into a data.frame and do some operations along the way. For example, if I do:
doc_length <- data.frame(t(apply(as.data.frame(data$doc_lenght_map), 1, unlist)))
os <- data.frame(t(apply(as.data.frame(data$operating_system), 1, unlist)))
navigation <- as.data.frame(data$navigation)
monday <- data.frame(t(apply(navigation[,grep("Monday",names(data$navigation))],1,unlist)))
Monday <- data.frame(apply(monday, 1, sum))
it works fine and I get what I want, with all the right subfields, and then I want to join them into a final data.frame that I will use for other operations.
Now, I'd like to do something like that for the subset of fields where I don't need to do any operations. So, for example, the days of the week contained in navigation are not included. I'd like to have something like this (suppose I have a data.frame df):
for(name in names(data))
{
  df <- cbind(df, data.frame(t(apply(as.data.frame(data$name), 1, unlist))))
}
The above loop gives me errors. So, what I want is a way to access all the fields of the list automatically, as in the loop, where the iterator "name" takes on all the field names of the list, without having to reference them individually and then do operations on those fields. I even tried
for(name in names(data))
{
  df <- cbind(df, data.frame(t(apply(as.data.frame(data[name]), 1, unlist))))
}
but it doesn't pick up all of the subfields. I also tried
data[, name]
but that doesn't work either. So I think I need to use the "$" operator.
Is it possible to do something like that?
Thank you a lot!
Davide
Like the other commenters, I am confused, but I will throw this out to see if it might point you in the right direction.
# make mtcars a list as an example
data <- lapply(mtcars,identity)
do.call(
  cbind,
  lapply(
    names(data),
    function(name){
      data.frame(data[name])
    }
  )
)
I have a function which returns JSON history data from the Version model of reversion.models.
from django.http import HttpResponse
from reversion.models import Version
from django.contrib.admin.models import LogEntry
import json

def history_list(request):
    history_list = Version.objects.all().order_by('-revision__date_created')
    data = []
    for i in history_list:
        data.append({
            'date_time': str(i.revision.date_created),
            'user': str(i.revision.user),
            'object': i.object_repr,
            'field': i.revision.comment.split(' ')[-1],
            'new_value_field': str(i.field_dict),
            'type': i.content_type.name,
            'comment': i.revision.comment
        })
    data_ser = json.dumps(data)
    return HttpResponse(data_ser, content_type="application/json")
When I run the above snippet I get the following JSON output:

[{"type": "fruits", "field": "colour", "object": "anyobject", "user": "anyuser", "new_value_field": "{'price': $23, 'weight': 2kgs, 'colour': 'red'}", "comment": "Changed colour."}]

In the function above,

'comment': i.revision.comment

returns "comment": "Changed colour.", and colour is the field, which I retrieve from the comment with

'field': i.revision.comment.split(' ')[-1]

But I assume getting the field name and value from field_dict is a better approach.
Problem: from the above JSON list I would like to extract the new field value and the old value; for the new value, only the value of colour.
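A hedged sketch of that field_dict idea, assuming field_dict behaves like a plain dict keyed by field name and that the comment ends with the changed field, as in the output above:

def changed_field_value(version):
    # "Changed colour." -> "colour"; this relies on the default comment format.
    field_name = version.revision.comment.split(' ')[-1].rstrip('.')
    return field_name, version.field_dict.get(field_name)

Inside the loop above, field, new_value = changed_field_value(i) would then give the colour value for that version; the old value would have to come from the previous Version of the same object.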
Getting the changed fields isn't as easy as checking the comment, as this can be overridden.
Django-reversion just takes care of storing each version, not comparing.
Your best option is to look at the django-reversion-compare module and its admin.py code.
The majority of the code in there is designed to produce a neat side-by-side HTML diff page, but the code should be able to be re-purposed to generate a list of changed fields per object (as there can be more than one changed field per version).
The code should* include a view independent way to get the changed fields at some point, but this should get you started:
from django.db import models

from reversion_compare.admin import CompareObjects
from reversion.revisions import default_revision_manager

def changed_fields(obj, version1, version2):
    """
    Return the list of fields of obj whose values differ between
    version1 and version2.
    """
    # Create a list of all normal fields and append many-to-many fields
    fields = [field for field in obj._meta.fields]
    concrete_model = obj._meta.concrete_model
    fields += concrete_model._meta.many_to_many

    # This gathers the related reverse ForeignKey fields, so we can do ManyToOne compares
    reverse_fields = []
    # From: http://stackoverflow.com/questions/19512187/django-list-all-reverse-relations-of-a-model
    changed_fields = []
    for field_name in obj._meta.get_all_field_names():
        f = getattr(
            obj._meta.get_field_by_name(field_name)[0],
            'field',
            None
        )
        if isinstance(f, models.ForeignKey) and f not in fields:
            reverse_fields.append(f.rel)
    fields += reverse_fields

    for field in fields:
        try:
            field_name = field.name
        except AttributeError:
            # is a reverse FK field
            field_name = field.field_name

        is_reversed = field in reverse_fields
        obj_compare = CompareObjects(field, field_name, obj, version1, version2,
                                     default_revision_manager, is_reversed)
        if obj_compare.changed():
            changed_fields.append(field)

    return changed_fields
This can then be called like so:
changed_fields(MyModel, history_list_item1, history_list_item2)
Where history_list_item1 and history_list_item2 correspond to various actual Version items.
*: Said as a contributor, I'll get right on it.
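To get hold of two Version items to compare, django-reversion's Version.objects.get_for_object is one option; a sketch, where MyModel and the primary key are placeholders:

from reversion.models import Version

instance = MyModel.objects.get(pk=1)                 # placeholder object and pk
versions = Version.objects.get_for_object(instance)  # versions of this object
if versions.count() >= 2:
    # compare the two most recent versions (check the queryset ordering
    # for your reversion release before relying on the indices)
    fields = changed_fields(instance, versions[0], versions[1])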