In a Vertex AI pipeline I am updating an image dataset, thus:
ds_op = gcc_aip.ImageDatasetImportDataOp(
    project=project,
    dataset=get_dataset_id_op.outputs['dataset'],
    gcs_source=DATASET_PATH,
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification
)
I have tried adding images, updating the CSV file with their paths and labels, and uploading it to GCS. When I then run the pipeline, the images are added to the dataset but their labels are ignored and they are classed as Unlabeled. What am I doing wrong? TIA!
UPDATE: I am trying to use 'data_item_labels (JsonObject): Labels that will be applied to newly imported DataItems.' but I don't know what format is expected. I have tried JSON, CSV, JSON Lines, etc., but keep getting
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
errors.
UPDATE 2: I finally figured out that I should be passing a JSON object, not a file URI, but I have tried everything I can think of and I either get JSON errors or "Invalid data_item_labels.".
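For reference, this is roughly what the call looks like with the label argument added; the label key and value below are just placeholders, since the expected format is exactly the part I can't work out:

ds_op = gcc_aip.ImageDatasetImportDataOp(
    project=project,
    dataset=get_dataset_id_op.outputs['dataset'],
    gcs_source=DATASET_PATH,
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
    # a plain dict; I have also tried json.dumps() of the same dict
    data_item_labels={"some_key": "some_value"}
)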
So, I'm using the following code to get pandas to read my JSON text file:
import json
import pandas as pd

f = open('C:/Users/stans/WFH Project/data.json')
data = json.load(f)
df = pd.DataFrame(data, index=[0])
f.close()
Once I execute the cell, I get
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 1535: character maps to <undefined>
I used the above code on a smaller sample of JSON data and it worked. But since I updated the file to include a much larger sample, I get that error.
I verified that the JSON format is correct, and I also tried passing encoding='utf-8' and errors='ignore' to the open() call, as shown below.
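The two variants look like this (same file as above):

f = open('C:/Users/stans/WFH Project/data.json', encoding='utf-8')

and

f = open('C:/Users/stans/WFH Project/data.json', errors='ignore')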
Both produced value errors. Any ideas? Thanks in advance for your help!
Has anyone figured out how to get csv (or any other package) to work in RevitPythonShell? I've only been able to get Excel from Interop to work.
When I try running csv in RPS, the terminal executes and shows no error or any other kind of feedback, and the file is not created either.
This is the basic code I'm trying to run, which I believe comes from a CSV tutorial.
import csv

with open('mycsv2.csv', 'w') as f:
    fieldnames = ['column1', 'column2', 'column3']
    thewriter = csv.DictWriter(f, fieldnames=fieldnames)
    thewriter.writeheader()
    for i in range(1, 10):
        thewriter.writerow({'column1': 'one', 'column2': 'two', 'column3': 'three'})
I find csv much more user-friendly and easier to understand than Interop Excel. I believe I've read somewhere that it's doable, but of course I can't find the source now.
All help, tips, or tricks are appreciated.
I can get it to work by supplying the full path name to the open function, so it looks like this (showing the full path to my Documents folder):
import csv

with open(r'C:\Users\callum\Documents\mycsv2.csv', 'w') as f:
    fieldnames = ['column1', 'column2', 'column3']
    thewriter = csv.DictWriter(f, fieldnames=fieldnames)
    thewriter.writeheader()
    for i in range(1, 10):
        thewriter.writerow({'column1': 'one', 'column2': 'two', 'column3': 'three'})
Let me know if that does the trick!
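If you want to see where the bare relative path was actually going, a quick check (this only needs the standard library, which should be available in RPS) is to print the working directory; my guess is that it isn't your project folder when running inside Revit, which is why the full path fixes it:

import os
# prints the current working directory inside Revit, i.e. where 'mycsv2.csv' would land
print(os.getcwd())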
I'm getting an error while displaying a CSV file through PySpark. I've attached the PySpark code and the CSV file that I used.
from pyspark.sql import *

spark.conf.set("fs.azure.account.key.xxocxxxxxxx", "xxxxx")
time_on_site_tablepath = "wasbs://dwpocblob@dwadfpoc.blob.core.windows.net/time_on_site.csv"
time_on_site = spark.read.format("csv").options(header='true', inferSchema='true').load(time_on_site_tablepath)
display(time_on_site.head(50))
The error is shown below
ValueError: Some of types cannot be determined by the first 100 rows, please try again with sampling
The schema of the CSV file is shown below
time_on_site:pyspark.sql.dataframe.DataFrame
next_eventdate:timestamp
barcode:integer
eventdate:timestamp
sno:integer
eventaction:string
next_action:string
next_deviceid:integer
next_device:string
type_flag:string
site:string
location:string
flag_perimeter:integer
deviceid:integer
device:string
tran_text:string
flag:integer
timespent_sec:integer
gg:integer
CSV file data is attached below
next_eventdate,barcode,eventdate,sno,eventaction,next_action,next_deviceid,next_device,type_flag,site,location,flag_perimeter,deviceid,device,tran_text,flag,timespent_sec,gg
2018-03-16 05:23:34.000,1998296,2018-03-14 18:50:29.000,1,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,0,124385,0
2018-03-17 07:22:16.000,1998296,2018-03-16 18:41:09.000,3,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,0,45667,0
2018-03-19 07:23:55.000,1998296,2018-03-17 18:36:17.000,6,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,1,132458,1
2018-03-21 07:25:04.000,1998296,2018-03-19 18:23:26.000,8,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,0,133298,0
2018-03-24 07:33:38.000,1998296,2018-03-23 18:39:04.000,10,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,0,46474,0
What could be done to load the CSV file successfully?
There is no issue with your syntax; it works fine.
The issue is in the data of your CSV file: the column named type_flag contains only None (null) values, so its datatype cannot be inferred.
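As far as I can tell (this is my reading of how display() handles a plain Python list, not something from the docs), head(50) returns a list of Row objects and display() builds a new DataFrame from that list, which is where the type inference fails. You can reproduce the same error directly:

rows = time_on_site.head(50)   # list of Row objects; type_flag is None in every one
spark.createDataFrame(rows)    # ValueError: Some of types cannot be determined ...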
So, here are two options.
You can display the data without using head(), like this:
display(time_on_site)
If you want to use head(), you need to replace the null values; here I replaced them with an empty string ('').
time_on_site = time_on_site.fillna('')
display(time_on_site.head(50))
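If you would rather leave the other columns untouched, fillna also takes a subset argument, so a narrower version of the same fix is:

time_on_site = time_on_site.fillna('', subset=['type_flag'])  # only replace nulls in the problem column
display(time_on_site.head(50))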
For some reason, probably a bug, even if you provide a schema via spark.read.schema(my_schema).csv('path'), you still get the same error on a display(df.head()) call. display(df) works, though; it gave me a WTF moment.
I have a JSON file, but it weighs 186 MB. I try to read it via Python.
import json
f = open('file.json','r')
r = json.loads(f.read())
ValueError: Extra data: line 88 column 2 -...
How can I open it? Please help.
Your JSON file isn't a single JSON document; it's several JSON documents mashed together.
The first instance of this occurs at the 1630070th character:
'шова"}]}]}{"response":[{"count'
^ here
That said, jq appears to be able to handle it, so the individual parts are fine.
You'll need to split the file at the boundaries of the individual JSON objects. Try catching the JSONDecodeError and using its .colno to slice the text into correct chunks.
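One way to implement that splitting, as a sketch (this uses json.JSONDecoder.raw_decode, which returns the offset where each document ends, instead of catching the error):

import json

def load_concatenated_json(path, encoding='utf-8'):
    # Reads a file that contains several JSON documents mashed together
    # and returns them as a list of Python objects.
    decoder = json.JSONDecoder()
    with open(path, encoding=encoding) as f:
        text = f.read()
    objects = []
    pos = 0
    while pos < len(text):
        obj, end = decoder.raw_decode(text, pos)  # end = index just past this document
        objects.append(obj)
        pos = end
        while pos < len(text) and text[pos].isspace():  # skip whitespace between documents
            pos += 1
    return objects

parts = load_concatenated_json('file.json')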
A minor note: you can also read straight from the file object with
r = json.load(f)
though that alone won't fix the Extra data error.