Upload contents of CSV as new maximum stock position in Exact Online - csv

I want to upload the contents of a CSV file as new values into an Exact Online data set, using for instance the following SQL statement:
update exactonlinerest..ItemWarehouses
set maximumstock=0
where id='06071a98-7c74-4c26-9dbe-1d422f533246'
and maximumstock != 0
I can retrieve the contents of the file using:
select *
from files('C:\path\SQLScripts', '*.csv', true)#os fle
join read_file_text(fle.file_path)#os
But I seem unable to split the multi-line text in the file_contents field into separate lines or records.
How can I split the file_contents field into multiple lines (for instance using 'update ...' || VALUE and then running it through ##mydump.sql, or directly using an insert into / update statement)?

For now I've been able to solve it using regular expressions and then loading the generated SQL statements into the SQL engine as follows:
select regexp_replace(rft.file_contents, '^([^,]*),([^,]*)(|,.*)$', 'update exactonlinerest..ItemWarehouses set maximumstock = $1 where code = $2 and maximumstock != $1;' || chr(13), 1, 0, 'm') stmts
, 'dump2.sql' filename
from files('C:\path\SQLScripts', '*.csv', true)#os fle
join read_file_text(fle.file_path)#os rft
local export documents in stmts to "c:\path\sqlscripts" filename column filename
#c:\hantex\path\dump2.sql
However, it is error-prone when I have a single quote in the article code.
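The same statement generation can also be sketched outside the SQL engine, doubling any single quote in the article code so it cannot break the generated statement. A minimal Python sketch (the file name, the column order of the regular expression above, and the absence of a header row are assumptions):
import csv

# Column 1 = new maximum stock, column 2 = article code, as in the regular expression above.
# Assumes the first column is numeric; a real script should validate it.
with open(r"C:\path\SQLScripts\stock.csv") as src, open(r"C:\path\SQLScripts\dump2.sql", "w") as out:
    for row in csv.reader(src):
        stock, code = row[0], row[1].replace("'", "''")  # double single quotes in the article code
        out.write(
            "update exactonlinerest..ItemWarehouses "
            "set maximumstock = %s where code = '%s' and maximumstock != %s;\n"
            % (stock, code, stock)
        )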

Related

Lua - How to analyse a .csv export to show the highest, lowest and average values etc

Using Lua, I'm downloading a .csv file and then taking the first line and last line to help me validate the time period visually, using the start and end date/times provided.
I'd also like to scan through the values and create a variety of variables, e.g. the highest, lowest and average value reported during that period.
The .csv is formatted in the following way:
created_at,entry_id,field1,field2,field3,field4,field5,field6,field7,field8
2021-04-16 20:18:11 UTC,6097,17.5,21.1,20,20,19.5,16.1,6.7,15.10
2021-04-16 20:48:11 UTC,6098,17.5,21.1,20,20,19.5,16.3,6.1,14.30
2021-04-16 21:18:11 UTC,6099,17.5,21.1,20,20,19.6,17.2,5.5,14.30
2021-04-16 21:48:11 UTC,6100,17.5,21,20,20,19.4,17.9,4.9,13.40
2021-04-16 22:18:11 UTC,6101,17.5,20.8,20,20,19.1,18.5,4.4,13.40
2021-04-16 22:48:11 UTC,6102,17.5,20.6,20,20,18.7,18.9,3.9,12.40
2021-04-16 23:18:11 UTC,6103,17.5,20.4,19.5,20,18.4,19.2,3.5,12.40
And my code to get the first and last line is as follows:
print("Part 1")
print("Start : check 2nd and last row of csv")
local ctr = 0
local i = 0
local csvfilename = "/home/pi/shared/feed12hr.csv"
local hFile = io.open(csvfilename, "r")
for _ in io.lines(csvfilename) do ctr = ctr + 1 end
print("...... Count : Number of lines downloaded = " ..ctr)
local linenumbera = 2
local linenumberb = ctr
for line in io.lines(csvfilename) do
    i = i + 1
    if i == linenumbera then
        secondline = line
        print("...... 2nd Line is = " ..secondline)
    end
    if i == linenumberb then
        lastline = line
        print("...... Last line is = " ..lastline)
        -- return line
    end
end
print("End : Extracted 2nd and last row of csv")
But I now plan to pick a column, ideally by name (as I'd like to be able to use this against other .csv exports of a similar structure), and get the .csv into a table/array...
I've found an option for that here - Csv file to a Lua table and access the lines as new table or function()
See below:
#!/usr/bin/lua
print("Part 2")
print("Start : Convert .csv to table")
local csvfilename = "/home/pi/shared/feed12hr.csv"
local csv = io.open(csvfilename, "r")
local items = {} -- Store our values here
local headers = {} --
local first = true
for line in csv:gmatch("[^\n]+") do
    if first then -- this is to handle the first line and capture our headers.
        local count = 1
        for header in line:gmatch("[^,]+") do
            headers[count] = header
            count = count + 1
        end
        first = false -- set first to false to switch off the header block
    else
        local name
        local i = 2 -- we start at 2 because we won't be incrementing for the header
        for field in line:gmatch("[^,]+") do
            name = name or field -- check if we know the name of our row
            if items[name] then -- if the name is already in the items table then this is a field
                items[name][headers[i]] = field -- assign our value at the header in the table with the given name.
                i = i + 1
            else -- if the name is not in the table we create a new index for it
                items[name] = {}
            end
        end
    end
end
print("End : .csv now in table/array structure")
But I'm getting the following error:
pi@raspberrypi:/ $ lua home/pi/Documents/csv_to_table.lua
Part 2
Start : Convert .csv to table
lua: home/pi/Documents/csv_to_table.lua:12: attempt to call method 'gmatch' (a nil value)
stack traceback:
home/pi/Documents/csv_to_table.lua:12: in main chunk
[C]: ?
pi@raspberrypi:/ $
Any ideas on that? I can confirm that the .csv file is there.
Once everything (hopefully) is in a table, I then want to be able to generate a list of variables based on the information in a chosen column, which I can then use and send within a push notification or email (which I already have the code for).
The following is what I've been able to create so far, but I would appreciate any/all help to do more analysis of the values within the chosen column, so I can get things like the highest, lowest and average values.
print("Part 3")
print("Start : Create .csv analysis values/variables")
local total = 0
local count = 0
for name, item in pairs(items) do
    for field, value in pairs(item) do
        if field == "cabin" then
            print(field .. " = " .. value)
            total = total + value
            count = count + 1
        end
    end
end
local average = tonumber(total/count)
local roundupdown = math.floor(average * 100)/100
print(count)
print(total)
print(total/count)
print(roundupdown)
print("End : analysis values/variables created")
io.open returns a file handle on success. Not a string.
Hence
local csv = io.open(csvfilename, "r")
--...
for line in csv:gmatch("[^\n]+") do
--...
will raise an error.
You need to read the file into a string first.
Alternatively, you can iterate over the lines of a file using file:lines(...) or io.lines, as you already do in your code.
local csv = io.open(csvfilename, "r")
if csv then
    for line in csv:lines() do
        -- ...
You're iterating over the file more often than you need to.
Edit:
This is how you could fill a data table while calculating the maxima for each column on the fly. This assumes you always have valid lines! A proper solution should verify the data.
-- prepare a table to store the minima and maxima in
local colExtrema = {min = {}, max = {}}
local rows = {}
-- go over the file linewise (csvFile is the handle returned by io.open)
for line in csvFile:lines() do
    -- split the line into 3 parts
    local timeStamp, id, dataStr = line:match("([^,]+),(%d+),(.*)")
    -- create a row container
    local row = {timeStamp = timeStamp, id = id, data = {}}
    -- fill the row data
    for val in dataStr:gmatch("[%d%.]+") do
        table.insert(row.data, tonumber(val)) -- store as a number
        -- find the biggest value in this column so far
        -- our initial value is the smallest number possible
        local oldMax = colExtrema.max[#row.data] or -math.huge
        -- store the bigger value as the new maximum
        colExtrema.max[#row.data] = math.max(row.data[#row.data], oldMax)
    end
    -- insert row data
    table.insert(rows, row)
end

How to create MySQL database programmatically using matlab

I am doing some data analytics using MySQL and MATLAB. My program works fine if the database already exists, but it does not work when the database I am trying to connect to is not there. What I want to do is create a database with a fixed name if no database with that name is present. I searched all over the internet and haven't found any option for that; maybe I am missing something.
Additionally, I would like to create a table in the newly created database and store some random data in it. I can do that part, but I am stuck on the first part, which is creating the database programmatically using MATLAB.
Please note that I have to use only MATLAB for this project. Any kind of cooperation will be greatly appreciated.
Update
The code example is given below:
function findIfFeederExists(id) % signature implied by the header comment and the trailing end
%findIfFeederExists Summary of this function goes here
% finds whether there is no. of feeders are empty or not
% Detailed explanation goes here
%Set preferences with setdbprefs.
setdbprefs('DataReturnFormat', 'dataset');
setdbprefs('NullNumberRead', 'NaN');
setdbprefs('NullStringRead', 'null');
%Make connection to database. Note that the password has been omitted.
%Using ODBC driver.
conn = database('wisedb', 'root', '');
conn.Message;
%Read data from database.
sqlQuery = 'SELECT * FROM joined_table';
%sqlQuery = 'SELECT * FROM joined_table where joined_table.`N. of Feeder` > 0';
curs = exec(conn, sqlQuery);
curs = fetch(curs);
dbMatrix = curs.data;
[row_count, ~] = size(dbMatrix);
if (row_count >= id)
    val = dbMatrix(id, 3);
    disp(val);
    if (val.N0x2EOfFeeder > 0)
        Str = strcat('Feeder is present on the id : ', num2str(id));
        disp(Str);
        disp(dbMatrix(id, 1:end));
    else
        Str = strcat('Feeder is NOT present on the id : ', num2str(id));
        disp(Str);
    end
else
    Str = strcat('No row found for id : ', num2str(id));
    disp(Str);
end
% = exec(conn,'SELECT * FROM inventoryTable');
close(curs);
%Assign data to output variable
%Close database connection.
close(conn);
%Clear variables
clear curs conn
end
You can see that I can connect to an existing database using ODBC, but I am not sure how to create a new database. What can be done for this?

Null Values by migration from MySQL to mongo

I need to migrate some tables from MySQL to mongoDB. After searching the web, it looks to me like a MySQL export to CSV and an import from that CSV into mongoDB should be the fastest and easiest way.
I'm exporting from MySQL using this query:
select * into outfile '/tmp/feed.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY ''
from feeds;
But there is one problem.
If a MySQL field is NULL, the MySQL export writes an \N (or \\N) into the CSV file.
When importing that file, mongoDB imports the \\N as a string instead of a NULL value.
The mongoDB import option --ignoreBlanks will not work, because \\N is not "blank" from mongoDB's point of view.
So my question:
1.) how could I avoid exporting NULL as \\N?
or
2.) how could mongoimport read/interpret \\N as a NULL or empty value?
By the way: it's not an option to postprocess the CSV to search and replace the \\N.
One possible answer for 1.) could be a modification of the select statement: SELECT IFNULL( field1, "" ). But in this case I have to define and check every column. An export script would not be so flexible if all columns are defined in the select statement.
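One way to keep an export script flexible would be to build the IFNULL select list from the table's columns instead of hard-coding them; a Python sketch using MySQLdb (table name and output file taken from the query above, connection details are placeholders):
import MySQLdb

# Placeholder connection details.
db = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cursor = db.cursor()

# Read the column names of the table being exported.
cursor.execute("DESCRIBE feeds")
columns = [row[0] for row in cursor.fetchall()]

# Wrap every column in IFNULL so NULLs are written as empty strings instead of \N.
select_list = ", ".join("IFNULL(`%s`, '') AS `%s`" % (c, c) for c in columns)
export_sql = (
    "SELECT %s INTO OUTFILE '/tmp/feed.csv' "
    "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' "
    "LINES TERMINATED BY '\\n' FROM feeds"
) % select_list
cursor.execute(export_sql)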
//Edit: while playing around with that import <-> export I found another problem: date fields, which are also interpreted as strings by mongoimport.
I would comment rather than adding an answer, but my reputation is still quite low...
What I've done in a project I'm working on is do the migration using a Python script. I have the exported table in a CSV. The code I use looks like this:
import csv
import pymongo
f = open( filename )
reader = csv.reader( f )
destinationItems = []
The following reads the column names (the first row in the CSV):
columns = next( reader )
The column names can be put in a tuple, which here I call 'keys'. The code is oblivious of the actual column names. Each row is then converted to a dictionary, ready to be amended to remove (or otherwise handle) NULLs.
keys = tuple( columns )
for property in reader:
entry = dict( zip( keys, property ) )
and the following deals with NULLs; in this case I remove the entry altogether if it is found to be 'NULL' in the exported CSV.
entry = { k:v for k,v in entry.iteritems() if ( k in keys and ( v != 'NULL' ) or k not in keys ) }
destinationItems.append( entry )
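Regarding your edit about date fields: since every row already goes through Python, a date column can be converted to a real datetime before the insert, so mongoDB stores a date instead of a string. A sketch, assuming a column named 'created_at' in MySQL's 'YYYY-MM-DD HH:MM:SS' format:
from datetime import datetime

# Assumed column name and format; adjust both to the actual export.
for entry in destinationItems:
    if entry.get( 'created_at' ):
        entry['created_at'] = datetime.strptime( entry['created_at'], '%Y-%m-%d %H:%M:%S' )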
Update the mongodb instance
mongoClient = pymongo.MongoClient()
mongoClient['mydb'].mycollection.insert( destinationItems )

Insert Data to MYSQL using Foxpro

In FoxPro, using a native table, I usually do this when inserting new data:
Sele Table
If Seek(lcIndex)
    Update Record
Else
    Insert New Record
EndIf
If I use MySQL as my database, what is the best and fastest way to do this in FoxPro code using SPT? I will be updating a large number of records, up to 80,000 transactions.
Thanks,
Herbert
I would take what Jerry supplied one step further. Any insert, update or delete done through SQL pass-through by building up strings can run into terrible debugging problems, for the same reasons as SQL injection.
What if your "myValue" field had a single quote, a double quote, or a double hyphen (indicating a comment)? You would be hosed.
Parameterize your statement using VFP variable references: put "?" in your SQL statement to mark where each value should be used, and VFP passes the values properly. This also helps with data types, such as not having to convert numbers into strings when building the "myStatement".
Also, in VFP, you can use TEXT/ENDTEXT to simplify the readability of the commands
lcSomeStringVariable = "My Test Value"
lnANumericValue = 12.34
lnMyIDKey = 389
TEXT to lcSQLCmd NOSHOW PRETEXT 1+2+8
update [YourSchems].[YourTable]
set SomeTextField = ?lcSomeStringVariable,
SomeNumberField = ?lnANumericValue
where
YourPKColumn = ?lnMyIDKey
ENDTEXT
=sqlexec( yourHandle, lcSQLCmd, "localCursor" )
You can use SQL pass-through in your Visual FoxPro application. Take a look at SQLCONNECT() or SQLSTRINGCONNECT() for connecting to your database. Also look at SQLEXEC() for executing your SQL statement.
For Example:
myValue = 'Test'
myHandle = SQLCONNECT('sqlDBAddress','MyUserId','MyPassword')
myStatement = "UPDATE [MySchema].[Mytable] SET myField = '" + myValue + "' WHERE myPk = 1"
=SQLEXEC(myHandle, myStatement,"myCursor")
=SQLEXEC(myHandle, "SELECT * FROM [MySchema].[Mytable] WHERE myPk = 1","myCursor")
SELECT myCursor
BROWSE LAST NORMAL
This would be your statement string for SQLEXEC:
INSERT INTO SOMETABLE
SET KEYFIELD = ?M.KEYFIELD,
FIELD1 = ?M.FIELD1
...
FIELDN = ?M.FIELDN
ON DUPLICATE KEY UPDATE
FIELD1 = ?M.FIELD1
...
FIELDN = ?M.FIELDN
Notice that the ON DUPLICATE KEY UPDATE part does not contain the key field; otherwise it would normally be identical to the insert (or not, if you want to do something else when the record already exists).

MySQL Dynamic Query Statement in Python with Dictionary

Very similar to this question: MySQL Dynamic Query Statement in Python
However, what I am looking to do, instead of using two lists, is to use a dictionary.
Let's say I have this dictionary:
instance_insert = {
    # sql column     variable value
    'instance_id' : 'instnace.id',
    'customer_id' : 'customer.id',
    'os'          : 'instance.platform',
}
And I want to populate a MySQL database with an insert statement, using each sql column entry as the SQL column name and each variable value as the name of the variable that holds the value to be inserted into the table.
I'm kind of lost because I don't understand exactly what this statement does; it was pulled from the question I linked, where two lists were used to do what I want:
sql = "INSERT INTO instance_info_test VALUES (%s);" % ', '.join('?' for _ in instance_insert)
cur.execute (sql, instance_insert)
Also, I would like it to be dynamic in the sense that I can add/remove columns in the dictionary.
Before you post, you might want to try searching for something more specific to your question. For instance, when I Googled "python mysqldb insert dictionary", I found a good answer on the first page, at http://mail.python.org/pipermail/tutor/2010-December/080701.html. Relevant part:
Here's what I came up with when I tried to make a generalized version
of the above:
import sys
import json
import MySQLdb

def add_row(cursor, tablename, rowdict):
    # XXX tablename not sanitized
    # XXX test for allowed keys is case-sensitive
    # filter out keys that are not column names
    cursor.execute("describe %s" % tablename)
    allowed_keys = set(row[0] for row in cursor.fetchall())
    keys = allowed_keys.intersection(rowdict)
    if len(rowdict) > len(keys):
        unknown_keys = set(rowdict) - allowed_keys
        print >> sys.stderr, "skipping keys:", ", ".join(unknown_keys)
    columns = ", ".join(keys)
    values_template = ", ".join(["%s"] * len(keys))
    sql = "insert into %s (%s) values (%s)" % (
        tablename, columns, values_template)
    values = tuple(rowdict[key] for key in keys)
    cursor.execute(sql, values)

filename = ...
tablename = ...
db = MySQLdb.connect(...)
cursor = db.cursor()
with open(filename) as instream:
    row = json.load(instream)
    add_row(cursor, tablename, row)
Peter
If you know your inputs will always be valid (the table name is valid, the columns are present in the table), and you're not importing from a JSON file as the example is, you can simplify this function, but it will accomplish what you want. While it may initially seem like DictCursor would be helpful, DictCursor is useful for returning a dictionary of values; it can't execute from a dict.
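To tie this back to the question: instead of the JSON file in the example, your instance_insert dictionary can be passed straight to add_row. A minimal sketch using the function and imports above (connection details are placeholders, and the literal values stand in for whatever instance.id, customer.id and instance.platform hold at runtime):
db = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")  # placeholders
cursor = db.cursor()

instance_insert = {
    'instance_id' : 'i-0123456789',    # stand-in for instance.id
    'customer_id' : 'cust-42',         # stand-in for customer.id
    'os'          : 'linux',           # stand-in for instance.platform
}

add_row(cursor, 'instance_info_test', instance_insert)
db.commit()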