RFC-enabled function module to update physical samples

I need to update some fields of Physical samples in SAP ERP:
List of columns which are in the table QPRS:
ABINF: Storage Information
ABDAT: Storage Deadline
ABORT: Storage Location
List of fields which correspond to statuses (table JEST):
Sample Was Stored: status I0363 (short code in Status History: "STRD")
Sample Consumed/Destroyed: status I0362 (short code in Status History: "USED")
Is there an RFC-enabled function module to update these fields?
Thanks.

As far as I know there is no BAPI for updating storage data, so you will need ABAP development for this. QPRS_QPRS_STORAGE_UPDATE is the FM you can copy into a Z one and make remote-enabled:
DATA: i_qprs      TYPE qprs,
      i_lgort     TYPE qprs-lgort VALUE 'Z07',
      i_abort     TYPE qprs-abort VALUE '1',
      i_abdau     TYPE qprs-abdau VALUE 10,
      i_abdat     TYPE qprs-abdat VALUE '20200510',
      i_abinf     TYPE qprs-abinf VALUE 'info 1st',
      i_aufbx     TYPE rqprs-aufbx VALUE 'first storage',
      i_prnvx     TYPE rqprs-prnvx VALUE abap_true,
      i_qprs_cust TYPE qprs_cust,
      e_qprs_new  TYPE qprs,
      e_aufbx     TYPE rqprs-aufbx,
      e_prnvx     TYPE rqprs-prnvx.

i_qprs-phynr = '000900000054'.

CALL FUNCTION 'QPRS_QPRS_STORAGE_UPDATE'
  EXPORTING
    i_qprs      = i_qprs
    i_lgort     = i_lgort
    i_abort     = i_abort
    i_abdau     = i_abdau
    i_abdat     = i_abdat
    i_abinf     = i_abinf
    i_aufbx     = i_aufbx
    i_prnvx     = i_prnvx
    i_qprs_cust = i_qprs_cust
  IMPORTING
    e_qprs_new  = e_qprs_new
    e_aufbx     = e_aufbx
    e_prnvx     = e_prnvx
  EXCEPTIONS
    sample_locked          = 1
    locking_error          = 2
    sample_not_found       = 3
    abort_not_found        = 4
    sample_already_changed = 5.
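
Once copied, the Z version mainly needs the "Remote-Enabled Module" flag set in SE37; the body can simply delegate. A rough sketch (the Z name, the UPDATE_FAILED exception and the COMMIT WORK are my own assumptions, not part of the standard FM):

FUNCTION z_qprs_storage_update.
* Created in SE37 as a remote-enabled module with the same parameter
* interface as QPRS_QPRS_STORAGE_UPDATE; it just delegates the call.
  CALL FUNCTION 'QPRS_QPRS_STORAGE_UPDATE'
    EXPORTING
      i_qprs      = i_qprs
      i_lgort     = i_lgort
      i_abort     = i_abort
      i_abdau     = i_abdau
      i_abdat     = i_abdat
      i_abinf     = i_abinf
      i_aufbx     = i_aufbx
      i_prnvx     = i_prnvx
      i_qprs_cust = i_qprs_cust
    IMPORTING
      e_qprs_new  = e_qprs_new
      e_aufbx     = e_aufbx
      e_prnvx     = e_prnvx
    EXCEPTIONS
      OTHERS      = 1.
  IF sy-subrc <> 0.
*   propagate the failure to the RFC caller
    RAISE update_failed.
  ENDIF.
* persist the change for the external caller (assuming the FM does not
* commit on its own)
  COMMIT WORK.
ENDFUNCTION.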

Jsony newHook has `SIGSEGV: Illegal storage access. (Attempt to read from nil?)` when deserializing into ref-objects

I am writing a web-application and am deserializing via jsony into norm-model-object types.
Norm model types are always ref objects. Somehow my code, which is very similar to the default example in jsony's GitHub documentation, fails with the error SIGSEGV: Illegal storage access. (Attempt to read from nil?).
Here is my code sample:

import std/[typetraits, times]
import norm/[pragmas, model]
import jsony

const OUTPUT_TIME_FORMAT* = "yyyy-MM-dd'T'HH:mm:ss'.'ffffff'Z'"

type Character* {.tableName: "wikientries_character".} = ref object of Model
  name*: string
  creation_datetime*: DateTime
  update_datetime*: DateTime

proc parseHook*(s: string, i: var int, v: var DateTime) =
  ##[ jsony hook that is automatically called to convert a JSON string to a DateTime
  ``s``: The full JSON string that is being deserialized. Your type may only be a part of this.
  ``i``: The index into the JSON string at which the section to deserialize here starts.
  ``v``: The variable to fill with a proper value. ]##
  var str: string
  s.parseHook(i, str)
  v = parse(str, OUTPUT_TIME_FORMAT, utc())

proc newHook*(entry: var Character) =
  let currentDateTime: DateTime = now()
  entry.creation_datetime = currentDateTime # <-- This line is listed as the reason for the SIGSEGV
  entry.update_datetime = currentDateTime
  entry.name = ""

var input = """ {"name":"Test"} """
let c = input.fromJson(Character)
I don't understand what the issue is here, as the jsony example on its GitHub page looks pretty similar:

type
  Foo5 = object
    visible: string
    id: string

proc newHook*(foo: var Foo5) =
  # Populates the object before it's fully deserialized.
  foo.visible = "yes"

var s = """{"id":"123"}"""
var v = s.fromJson(Foo5)
doAssert v.id == "123"
doAssert v.visible == "yes"
How can I fix this?
The answer lies in the fact that norm model types are ref objects, not normal (value) objects (thanks to ElegantBeef, Rika and Yardanico from the nim-discord for pointing this out)! If you never explicitly create the ref type, the memory for it is never allocated: unlike with value types, nothing does that allocation for you.
Therefore you must initialize/create a ref object before you can use it, and jsony does not take over that initialization for you!
The correct way to write the above newHook thus looks like this:
proc newHook*(entry: var Character) =
  entry = new(Character)
  let currentDateTime: DateTime = now()
  entry.creation_datetime = currentDateTime
  entry.update_datetime = currentDateTime
  entry.name = ""
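
If the hook can also be called with an already-allocated instance, a slightly more defensive variant (my sketch, not from the jsony docs) only allocates when the ref is actually nil:

proc newHook*(entry: var Character) =
  if entry.isNil:
    entry = new(Character) # allocate the ref object before touching fields
  let currentDateTime: DateTime = now()
  entry.creation_datetime = currentDateTime
  entry.update_datetime = currentDateTime
  entry.name = ""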

Lua - How to analyse a .csv export to show the highest, lowest and average values etc

Using Lua, I’m downloading a .csv file and then taking the first and last lines to help me validate the time period visually via the start and end date/times provided.
I’d also like to scan through the values and create a variety of variables, e.g. the highest, lowest and average value reported during that period.
The .csv is formatted in the following way:
created_at,entry_id,field1,field2,field3,field4,field5,field6,field7,field8
2021-04-16 20:18:11 UTC,6097,17.5,21.1,20,20,19.5,16.1,6.7,15.10
2021-04-16 20:48:11 UTC,6098,17.5,21.1,20,20,19.5,16.3,6.1,14.30
2021-04-16 21:18:11 UTC,6099,17.5,21.1,20,20,19.6,17.2,5.5,14.30
2021-04-16 21:48:11 UTC,6100,17.5,21,20,20,19.4,17.9,4.9,13.40
2021-04-16 22:18:11 UTC,6101,17.5,20.8,20,20,19.1,18.5,4.4,13.40
2021-04-16 22:48:11 UTC,6102,17.5,20.6,20,20,18.7,18.9,3.9,12.40
2021-04-16 23:18:11 UTC,6103,17.5,20.4,19.5,20,18.4,19.2,3.5,12.40
And my code to get the first and last line is as follows:

print("Part 1")
print("Start : check 2nd and last row of csv")
local ctr = 0
local i = 0
local csvfilename = "/home/pi/shared/feed12hr.csv"
local hFile = io.open(csvfilename, "r")
for _ in io.lines(csvfilename) do ctr = ctr + 1 end
print("...... Count : Number of lines downloaded = " .. ctr)
local linenumbera = 2
local linenumberb = ctr
for line in io.lines(csvfilename) do
  i = i + 1
  if i == linenumbera then
    secondline = line
    print("...... 2nd Line is = " .. secondline)
  end
  if i == linenumberb then
    lastline = line
    print("...... Last line is = " .. lastline)
    -- return line
  end
end
print("End : Extracted 2nd and last row of csv")
But I now plan to pick a column, ideally by name (as I’d like to be able to use this against other .csv exports of a similar structure), and get the .csv into a table/array...
I’ve found an option for that here - Csv file to a Lua table and access the lines as new table or function()
See below..
#!/usr/bin/lua
print("Part 2")
print("Start : Convert .csv to table")
local csvfilename = "/home/pi/shared/feed12hr.csv"
local csv = io.open(csvfilename, "r")
local items = {} -- Store our values here
local headers = {}
local first = true
for line in csv:gmatch("[^\n]+") do
  if first then -- this is to handle the first line and capture our headers.
    local count = 1
    for header in line:gmatch("[^,]+") do
      headers[count] = header
      count = count + 1
    end
    first = false -- set first to false to switch off the header block
  else
    local name
    local i = 2 -- we start at 2 because we won't be incrementing for the header
    for field in line:gmatch("[^,]+") do
      name = name or field -- check if we know the name of our row
      if items[name] then -- if the name is already in the items table then this is a field
        items[name][headers[i]] = field -- assign our value at the header in the table with the given name.
        i = i + 1
      else -- if the name is not in the table we create a new index for it
        items[name] = {}
      end
    end
  end
end
print("End : .csv now in table/array structure")
print("End : .csv now in table/array structure")
But I’m getting the following error:

pi@raspberrypi:/ $ lua home/pi/Documents/csv_to_table.lua
Part 2
Start : Convert .csv to table
lua: home/pi/Documents/csv_to_table.lua:12: attempt to call method 'gmatch' (a nil value)
stack traceback:
  home/pi/Documents/csv_to_table.lua:12: in main chunk
  [C]: ?
pi@raspberrypi:/ $

Any ideas on that? I can confirm that the .csv file is there.
Once everything (hopefully) is in a table, I then want to be able to generate a list of variables based on the information in a chosen column, which I can then use and send within a push notification or email (I already have the code for that).
The following is what I’ve been able to create so far, but I would appreciate any/all help to do more analysis of the values within the chosen column, so I can get things like the highest, lowest and average.
print("Part 3")
print("Start : Create .csv analysis values/variables")
local total = 0
local count = 0
for name, item in pairs(items) do
for field, value in pairs(item) do
if field == "cabin" then
print(field .. " = ".. value)
total = total + value
count = count + 1
end
end
end
local average = tonumber(total/count)
local roundupdown = math.floor(average * 100)/100
print(count)
print(total)
print(total/count)
print(rounddown)
print("End : analysis values/variables created")
io.open returns a file handle on success, not a string. Hence
local csv = io.open(csvfilename, "r")
--...
for line in csv:gmatch("[^\n]+") do
--...
will raise an error.
You need to read the file into a string first.
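For example, a minimal sketch (reusing csvfilename from the question):

local f = assert(io.open(csvfilename, "r"))
local content = f:read("*a") -- read the whole file into one string
f:close()
for line in content:gmatch("[^\n]+") do
  -- ...
end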
Alternatively, you can iterate over the lines of a file using file:lines(...) or io.lines, as you already do in your code:
local csv = io.open(csvfilename, "r")
if csv then
  for line in csv:lines() do
    -- ...
  end
  csv:close()
end
You're iterating over the file more often than you need to.
Edit:
This is how you could fill a data table while calculating the maxima for each column on the fly. This assumes you always have valid lines! A proper solution should verify the data.

-- prepare a table to store the minima and maxima in
local colExtrema = {min = {}, max = {}}
local rows = {}
-- go over the file linewise
for line in csvFile:lines() do
  -- split the line into 3 parts
  local timeStamp, id, dataStr = line:match("([^,]+),(%d+),(.*)")
  -- create a row container
  local row = {timeStamp = timeStamp, id = id, data = {}}
  -- fill the row data
  for val in dataStr:gmatch("[%d%.]+") do
    local num = tonumber(val)
    table.insert(row.data, num)
    -- find the biggest value so far
    -- our initial value is the smallest number possible
    local oldMax = colExtrema.max[#row.data] or -math.huge
    -- store the bigger value as the new maximum
    colExtrema.max[#row.data] = math.max(num, oldMax)
  end
  -- insert row data
  table.insert(rows, row)
end
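
Minima and averages were asked for as well; a small sketch (my addition, reusing the rows and colExtrema tables filled above) that derives them per column once the file has been read:

local sums = {}
for _, row in ipairs(rows) do
  for col, num in ipairs(row.data) do
    sums[col] = (sums[col] or 0) + num
    -- track the smallest value seen per column
    local oldMin = colExtrema.min[col] or math.huge
    colExtrema.min[col] = math.min(num, oldMin)
  end
end
for col, sum in ipairs(sums) do
  print(string.format("column %d: min=%.2f max=%.2f avg=%.2f",
    col, colExtrema.min[col], colExtrema.max[col], sum / #rows))
end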

Kaggle competition submission error : The value '' in the key column '' has already been defined

This is my first time participating in a Kaggle competition and I'm having trouble submitting my result table. I made my model using gbm and built a prediction table like below. The submission file has 2 columns, named 'fullVisitorId' and 'PredictedLogRevenue', as in other Kaggle competitions.
pred_oob = predict(object = model_gbm, newdata = te_df, type = 'response')
mysub = data.frame(fullVisitorId = test$fullVisitorId, Pred = pred_oob)
mysub = mysub %>%
  group_by(fullVisitorId) %>%
  summarise(Predicted = sum(Pred))
submission = read.csv('sample_submission.csv')
mysub = submission %>%
  left_join(mysub, by = 'fullVisitorId')
mysub$PredictedLogRevenue = NULL
names(mysub) = names(submission)
But when I try to submit the file, I get a 'fail' message saying...
ERROR: The value '8.893887e+17' in the key column 'fullVisitorId' has already been defined (Line 549026, Column 1)
ERROR: The value '8.895317e+18' in the key column 'fullVisitorId' has already been defined (Line 549126, Column 1)
ERROR: The value '8.895317e+18' in the key column 'fullVisitorId' has already been defined (Line 549127, Column 1)
Not just these 3 lines, but 8 more like this.
I have no idea what I did wrong. I also checked other kernels but couldn't find the answer. Please... help!!
This issue was because fullVisitorId was read as numeric instead of character, so all the leading zeros were dropped and long IDs ended up in scientific notation. Reading the file with read.csv() and the colClasses argument, or with fread(), makes it work.
I'm leaving this here in case someone else runs into a similar problem.
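For example (the file name is just a placeholder):

library(data.table)

# read.csv: force the ID column to character so IDs survive unchanged
test <- read.csv('test.csv', colClasses = c(fullVisitorId = 'character'))

# the same with data.table::fread
test <- fread('test.csv', colClasses = c(fullVisitorId = 'character'))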
For creating the submission dataframe, the easiest way is this:

import pandas as pd

subm_df = pd.read_csv('../input/sample_submission.csv')
subm_df['PredictedLogRevenue'] = <your prediction array>
subm_df.to_csv('Subm_1.csv', index=False)

Note this assumes your sample_submission.csv has all fullVisitorIds, which it usually does on Kaggle. Following this, I have never faced any issues.

Unable to create a constant value of type 'System.Object'. when trying to return a single value from LINQ query

The following code examples both result in this run-time error:
Unable to create a constant value of type 'System.Object'. Only primitive types or enumeration types are supported in this context.
var venuename = (from v in db.Venues
                 where v.PropGroupID.Equals(propgroup)
                 select v.VenueName).SingleOrDefault();

var venuename = (from v in db.Venues
                 where v.PropGroupID.Equals(propgroup)
                 select v.VenueName).FirstOrDefault();
Removing "SingleOrDefault()" (or FirstOrDefault) results in this compile-time error:
Cannot implicitly convert type 'System.Linq.IQueryable' to 'string'
var venuename = (from v in db.Venues
                 where v.PropGroupID.Equals(propgroup)
                 select v.VenueName);
I want to return the text from the field VenueName (it is in SQL Server DB with data type = nvarchar(75)). PropGroupID is currently unique (but that is not enforced). Suggestions?
You can get the value with (FirstOrDefault takes a bool predicate, not a selector, so the projection has to go through Select):

var venueName = db.Venues
    .Where(x => x.PropGroupID.Equals(propgroup))
    .Select(x => x.VenueName)
    .FirstOrDefault();
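
If the run-time error still appears after that change, a common cause (my note, not part of the original answer) is that propgroup is typed as object or a mismatched nullable; LINQ to Entities can only turn primitive or enum values into constants, so handing it a value of the column's exact type, e.g. via a cast, avoids the error:

// assumption: PropGroupID is an int column and propgroup arrives as object
int propGroupId = (int)propgroup;
var venueName = db.Venues
    .Where(x => x.PropGroupID == propGroupId)
    .Select(x => x.VenueName)
    .FirstOrDefault();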
About removing the "FirstOrDefault", you can find more information about lazy loading here:
Entity Framework Loading Related Entities
LINQ To Entities and Lazy Loading

Receiving warning messages with EZAPI EzDerivedColumn and input columns

I am working with EzAPI to assist in creating a package to stage data for transformation. It is working in terms of data movement; when opening the package in the designer, however, there are warning messages about the derived column's input columns being set to read-only.
Warning 148 Validation warning. Staging TableName: {AA700319-FC05-4F06-A877-599E826EA833}: The "Additional Columns.Inputs[Derived Column Input].Columns[DataSourceID]" on "Additional Columns" has usage type READONLY, but is not referenced by an expression. Remove the column from the list of available input columns, or reference it in an expression. StageFull.dtsx 0 0
I can manually change them in the designer to Read/Write, or deselect them, and the warning goes away; I am unable to get this to work programmatically, however.
I have tried removing the columns from the metadata, which works but doesn't remove them from the component, so the columns are still created in the XML.
XML section
<externalMetadataColumn refId="Package\Full\Staging TableName\DestinationStaging TableName.Inputs[OLE DB Destination Input].ExternalColumns[DataSourceID]" dataType="i4" name="DataSourceID" />
When I try to go to the underlying object and delete the column using component.DeleteInput(id) I get an error message stating that the input column cannot be removed.
0xC0208010
-1071611888
DTS_E_CANTDELETEINPUT
An input cannot be deleted from the inputs collection.
Here is the code I am using to create a data flow task with an OLE DB Source, a Derived Column, and an OLE DB Destination.
Note that the input columns are not present until after the derived column is attached to the source: dc.AttachTo(source);
public class EzMyDataFlow : EzDataFlow
{
    public EzMyDataFlow(EzContainer parent, EzSqlOleDbCM sourceconnection,
        EzSqlOleDbCM destinationconnection, string destinationtable, string sourcecommand, string dataflowname)
        : base(parent)
    {
        Name = dataflowname;

        EzOleDbSource source = new EzOleDbSource(this);
        source.Connection = sourceconnection;
        source.SqlCommand = sourcecommand;
        source.AccessMode = AccessMode.AM_SQLCOMMAND;
        source.Name = string.Format("Source_{0}", dataflowname);

        EzDerivedColumn dc = new EzDerivedColumn(this);
        dc.Name = "Additional Columns";

        // Setup DataSourceID
        string columnName = DBSchema.ReportFoundationalColumns.DataSourceID;
        dc.InsertOutputColumn(columnName);
        dc.SetOutputColumnDataTypeProperties(columnName, DataType.DT_I4, 0, 0, 0, 0);
        var c = dc.OutputCol(columnName);
        var property = c.CustomPropertyCollection["Expression"];
        property.Name = "Expression";
        property.Value = "#[TM::SourceDatabaseID]";
        property = c.CustomPropertyCollection["FriendlyExpression"];
        property.Name = "FriendlyExpression";
        property.Value = "#[TM::SourceDatabaseID]";
        dc.AttachTo(source);

        EzOleDbDestination destination = new EzOleDbDestination(this);
        destination.Table = destinationtable;
        destination.Connection = destinationconnection;
        destination.Name = string.Format("Destination{0}", dataflowname);
        destination.AttachTo(dc);
    }
}
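
For reference, the manual fix (unticking the read-only columns in the designer) corresponds to setting their usage type. A sketch of one way to try that programmatically after dc.AttachTo(source), with the caveat that the EzAPI member names `Meta` and `Comp` are my assumptions about how EzComponent exposes the metadata and the design-time wrapper:

// Sketch only (untested): deselect every virtual input column that no
// expression references, so the designer has nothing to warn about.
var input = dc.Meta.InputCollection[0];
var vInput = input.GetVirtualInput();
foreach (IDTSVirtualInputColumn100 vCol in vInput.VirtualInputColumnCollection)
{
    // UT_IGNORED removes the column from the derived column's input list,
    // the programmatic equivalent of unticking it in the designer.
    dc.Comp.SetUsageType(input.ID, vInput, vCol.LineageID, DTSUsageType.UT_IGNORED);
}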