CSV Import task status in NetSuite

I am creating a CSV import task in a script as below and submitting it. This task may need 30 seconds to 5 minutes to complete. Once I submit the job, the only handle I have is the task ID. I want to know the status, or the resulting message, once the job is finished or completed.
var cvsScriptTask = task.create({
    taskType: task.TaskType.CSV_IMPORT,
    mappingId: cvsTask.mappingId,
    importFile: cvsFileObj,
    name: cvsFileObj.name
});
var csvImportTaskId = cvsScriptTask.submit();
Can I get the status of this task/job from some table/record ?

I think you are looking for Setup>Import/Export>View CSV Import Status

You can get the task status programmatically by using the N/task module and calling task.checkStatus(taskId). See https://docs.oracle.com/en/cloud/saas/netsuite/ns-online-help/section_4345805891.html
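For example (a minimal sketch, assuming it runs in a second script that still has access to the csvImportTaskId returned by submit() above):

require(['N/task'], function(task) {
    // Look the task up by the ID that submit() returned
    var taskStatus = task.checkStatus({ taskId: csvImportTaskId });
    // taskStatus.status is one of PENDING, PROCESSING, COMPLETE or FAILED
    log.debug('CSV import status', taskStatus.status);
});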
Note that status COMPLETE only means that the CSV Import is done, not that it was successful. To see if all rows were imported you need to either check the UI Setup>Import/Export>View CSV Import Status (like #vVinceth suggested) or you can use SuiteQL to query SentEmail and parse the text in that email…
And even if you do find the SentEmail for the corresponding CSV Import it still doesn't say why some rows failed. That information is only available via the UI as far as I know. Very frustrating!

Related

Dash - live DataTable update takes very long

I've just developed an app with two containers:
The Worker container fetches data from physical equipment and stores it in a CSV file.
The Api container is a Dash app which reads the CSV and displays it in a Dash DataTable.
A scheduler runs the Worker every 10 minutes, so I get an updated CSV file every 10 minutes.
I'm using an interval component to read the CSV every second and update the table, so the user always has an up-to-date table. I also read the modification date of the CSV file, so I can tell the user the time of the last update, and I show when the Worker is busy fetching data (via the Docker API).
It worked very well for 3 weeks. I also deployed the app last week with Gunicorn and it worked well; the data was displayed instantly. But today there is a bug and I don't know where it's coming from:
Each time I open the app, the data in the table and the time of the last update take between 15 and 60 seconds to be displayed.
I can see in the logs that everything is working well, so the bug is only in the display.
Also, once the data and the time are displayed, the same story repeats 10 minutes later when the new data arrives: the new data takes a long time to be displayed.
Here is the part of my code that deals with the display of data in the table and the time:
from dash import Dash, dash_table, dcc, html, State
from dash.dependencies import Input, Output
import dash_bootstrap_components as dbc
import pandas as pd
import docker
from datetime import datetime
import os
import pytz

app = Dash(external_stylesheets=[dbc.themes.BOOTSTRAP])
server = app.server

df = pd.read_csv("./data.csv")
df = df.fillna("NaN")
app.title = "Host Tracer"

# Layout
app.layout = html.Div(children=[
    html.P(id='time-infos'),
    dash_table.DataTable(
        id='datatable-interactivity',
        columns=[{'name': i, 'id': i} for i in df.columns],
        filter_action="native",
        sort_action="native",
        sort_mode="multi",
        selected_columns=[],
        selected_rows=[],
        page_action="native",
        page_current=0,
        page_size=40,
        style_data={'text-align': 'center'},
    ),
    dcc.Interval(
        id='interval-component',
        interval=1 * 1000,  # in milliseconds
        n_intervals=0
    ),
])

def last_modification_time_of_csv(file):
    modTimesinceEpoc = os.path.getmtime(file)
    return datetime.fromtimestamp(modTimesinceEpoc).astimezone(pytz.timezone('Europe/Paris')).strftime("%d/%m/%Y at %H:%M:%S")

# Display data in table every second
@app.callback(
    Output('time-infos', 'children'),
    Output('datatable-interactivity', 'data'),
    Input('interval-component', 'n_intervals'))
def update_table(n):
    # Re-reads the whole CSV and creates a new Docker client on every tick
    df = pd.read_csv("./data.csv")
    df = df.fillna("NaN")
    date_time = last_modification_time_of_csv("./data.csv")
    client = docker.from_env()
    container = client.containers.get('container_worker')
    if container.attrs["State"]["Status"] == "running":
        infos = '⚠️ Worker in process...'
    else:
        infos = 'last updated data: ' + date_time
    return infos, df.to_dict('records')

if __name__ == '__main__':
    app.run_server(debug=True, host='0.0.0.0')
First I thought it was a problem with Gunicorn, so I replaced it with Flask, but the problem is still there.
Maybe someone has an idea of where the issue is coming from?
I should mention that my CSV has 15,000 lines.
Thank you,
EDIT
I just modified the time of the interval from 1*1000 to 1*2000 and it's working. Incredible. I really don't understand why.
I think I should rethink my mechanism; rewriting the data in my table every 1 or 2 seconds is too much. The thing is that I don't know exactly when the data is updated, because I also let the user fetch the data by clicking a button. That's why I'm refreshing every second.
Does someone have an idea of how I could avoid this constant refreshing and refresh only at the right time?
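For example, something like this crossed my mind (just a sketch, untested; the module-level _last_mtime variable is new, not part of my code above). It keeps the 1-second interval but skips the expensive re-read and re-render when the CSV hasn't changed on disk:

from dash.exceptions import PreventUpdate

_last_mtime = None  # hypothetical cache of the CSV's last modification time

@app.callback(
    Output('time-infos', 'children'),
    Output('datatable-interactivity', 'data'),
    Input('interval-component', 'n_intervals'))
def update_table(n):
    global _last_mtime
    mtime = os.path.getmtime("./data.csv")
    if _last_mtime == mtime:
        # Nothing new on disk: don't re-read the CSV or re-render the table
        raise PreventUpdate
    _last_mtime = mtime
    df = pd.read_csv("./data.csv").fillna("NaN")
    return 'last updated data: ' + last_modification_time_of_csv("./data.csv"), df.to_dict('records')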
Thanks
EDIT 2
Even with the 2-second interval, it sometimes takes a long time to load the data. I really don't understand what the problem is. Thanks

Best way to display MySQL data in a Tkinter GUI

Just wondering if there is a better way to display MySQL data to users of my app.
Basically I store look-up data and then put it in a pop-up window for viewing:
for row in all_reinforcement_data:
    r_total = ("Total number of reinforcement entries", mycursor.rowcount)
    r_id = ("\n\nId", row[0])
    messagebox.showinfo("Reinforcement Data Results", r_total + r_id)
This doesn't look too polished, but gives me what I want.
Are there any other ways of showing the user the data, in some form they could copy and paste from, ideally an Excel spreadsheet or something similar?
In a messagebox I don't believe you can do it. You could attempt it in a normal window, with an entry that you can only copy out of, similar to this question.
For example, you could do this to show the rows in a simple window:
from tkinter import *

row_info = Tk()
row_info.title("Reinforcement Data Results")

title = Label(text="Total number of reinforcement entries:")
title.pack()

# A read-only Entry still lets the user select and copy the value
data = Entry(row_info, borderwidth=0, justify='center')
data.insert(END, mycursor.rowcount)  # the cursor attribute is rowcount, not row_count
data.pack()
data.configure(state="readonly")

close = Button(row_info, text="Ok", command=row_info.destroy)
close.pack()
row_info.mainloop()
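If you want something closer to a spreadsheet view, a ttk.Treeview might be worth a try (a sketch, not from the original answer; the "id" and "value" column names are placeholders, so adjust them to your query):

from tkinter import Tk
from tkinter import ttk

root = Tk()
root.title("Reinforcement Data Results")

# One Treeview column per field; these names are hypothetical examples
tree = ttk.Treeview(root, columns=("id", "value"), show="headings")
tree.heading("id", text="Id")
tree.heading("value", text="Value")
for row in all_reinforcement_data:  # the rows you already fetched with mycursor
    tree.insert("", "end", values=(row[0], row[1]))
tree.pack(fill="both", expand=True)
root.mainloop()

Note that a Treeview is not copy-and-paste friendly out of the box either; for real Excel-style export you would have to write the rows to a file yourself.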

How to get the CSV alias for a thread in JMeter with sharing mode "Current thread"?

Here is my test plan structure.
User Login
Runtime Controller
While Controller (loop until EOF)
CSV dataset (items to add)
search and add to cart
Click cart.
Proceed to check out
Order submit.
Beanshell sampler to close CSV
User Logout.
I want each thread to read the CSV till EOF and add these items to the cart, hence I used the sharing mode "Current thread". Since add to cart and order submission are repeated for the test duration, I am closing the file and resetting the variable after order submit, so that the next iteration will again start reading from the beginning.
The Beanshell code is:
import org.apache.jmeter.services.FileServer;
FileServer.getFileServer().closeFile("Scripts_Helan\\DSOrderParts.csv");
String pPartNum = vars.get("pPartNum");
vars.put("pPartNum", "");
But when I run the test, the JMeter log shows the file name as
Stored: Scripts_Helan\DSOrderParts.csv Alias: Scripts_Helan\DSOrderParts.csv#1309262272
Don't I have to use the alias in closeFile? How can I get it?
I don't exactly get why you are using Beanshell code here.
You can handle the "start all over when done" part by setting up the Thread Group accordingly.
You can handle the "stop thread at end of file" part by setting up the CSV Data Set Config accordingly.
Please clarify what makes handling the file in Beanshell code necessary.
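For reference, this is the kind of setup I mean (a sketch; the field names are as they appear in the JMeter GUI, everything else left at its default):

Thread Group:
    Loop Count: Forever (let the Runtime Controller / test duration end the test)
CSV Data Set Config:
    Filename: Scripts_Helan\DSOrderParts.csv
    Recycle on EOF?: False
    Stop thread on EOF?: True
    Sharing mode: Current thread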

Flask-SQLAlchemy query is returning null for data that exists in my database. What could be the cause?

My Python program is meant to query my MySQL database for a record. The record exists in the database and contains data, but the program returns null values. The table being queried is titled Market. In that table there is a column titled market_cap and a column titled volume. When I use MySQL Workbench to query the database, the result shows that there is data in those columns. However, the program receives null.
Attached are two images (links, because I need to earn 10 reputation points to embed images in a post):
MySql database column image
shows a view of the column in my database that is having issues.
From the image, you can see that the data I need exists in my database.
Code with results from the PyCharm debugger
Before running the debugger, I set a breakpoint right after the line where the code queries the database for an object. Image two shows the output I received when the code queried the database.
Screenshot of the Market Model
Screenshot of the solution: I found that converting the market cap (market_cap) before adding it to the dictionary (price_map) returns the correct value. You can see it on line 138.
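i.e. something like this (a sketch; float() is my guess at the conversion on line 138 of the screenshot):

# converting the Decimal to a plain float before storing it fixed the output
price_map[market.pretty_date] = float(market.market_cap)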
What could cause existent data in a record to be returned as null?
import logging
from flask_restful import Resource
from api.resources.util.date_util import pretty_date_to_epoch, epoch_to_pretty_date
from common.decorators import log_exception
from db.models import db, Market

log = logging.getLogger(__name__)

def map_date_to_price():
    buy_date_list = ["2015-01-01", "2015-02-01", "2015-03-01"]
    sell_date_list = ["2014-12-19", "2014-01-10", "2015-01-20", "2016-01-10"]
    date_list = buy_date_list + sell_date_list
    market_list = []
    price_map = {}
    for day in date_list:
        market_list.append(db.session.query(Market)
                           .filter(Market.pretty_date == day).first())
    for market in market_list:
        price_map[market.pretty_date] = market.market_cap
    return price_map
The two fields that are (apparently) being retrieved as null are both db.Numeric. http://docs.sqlalchemy.org/en/latest/core/type_basics.html notes that these are, by default, backed by a decimal.Decimal object, which I'll bet can't be converted to JSON, so what comes back from Market.__repr__() will show them as null.
I would try adding asdecimal=False to the two db.Numeric() calls in Market.
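A sketch of what I mean (the model is reconstructed from the column names in the question, so treat everything else here as an assumption):

from db.models import db  # the same Flask-SQLAlchemy instance as in the question

class Market(db.Model):
    __tablename__ = 'market'                      # assumed
    id = db.Column(db.Integer, primary_key=True)  # assumed
    pretty_date = db.Column(db.String(10))        # assumed type
    # asdecimal=False tells SQLAlchemy to return plain Python floats instead
    # of decimal.Decimal, which serializes to JSON without turning into null
    market_cap = db.Column(db.Numeric(asdecimal=False))
    volume = db.Column(db.Numeric(asdecimal=False))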

Processing multiple files in Business Objects Data Services

I am new to Business Objects Data Services.
I have to run a dataflow that reads from a file. The filename should be matched using wildcards, like Platform*.csv, and I want to run the dataflow only if such a file exists. If the file is not present, the job should not error out or do anything else; it should just move on to the next dataflow or workflow.
I tried the code below to check whether the file exists, since the built-in function File_Exists cannot check for a file based on wildcards.
$FILEEXISTSFLAG = exec('/bin/ksh', '"ls xxxxxx/Platform*.csv"', 8);
My intention is, based on the value assigned to $FILEEXISTSFLAG by the code above, to decide whether to execute the dataflow (if $FILEEXISTSFLAG is null, do nothing, otherwise execute the dataflow), but it is returning the output below:
ls: cannot access /xxxxxx/Platform*.csv: No such file
Is there any other way to achieve this?
I was able to solve the above problem by using the index function.
$FILEEXISTSFLAG contains a value like "ls: cannot access Platform*.csv: No such file or directory", so I used the index function below. If its output is null (the error message is absent, so the file exists), the dataflow is executed; otherwise nothing happens.
index($FILEEXISTSFLAG, 'No such file', 1)
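Wired into the job, that could look roughly like this (Data Services script sketched from memory, so double-check the syntax; the -c argument to ksh is my assumption, not from the original post):

Script before the conditional:
    $FILEEXISTSFLAG = exec('/bin/ksh', '-c "ls xxxxxx/Platform*.csv"', 8);
Conditional object, If expression:
    index($FILEEXISTSFLAG, 'No such file', 1) IS NULL
Then branch: the dataflow that reads the file. Else branch: empty.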
Thanks,
Phani.