Debugging PL/SQL Collections using PL/SQL Developer

In the debug window of PL/SQL Developer (version 10.0.5.1710), how can I inspect a collection as a whole: its nesting hierarchy and its elements with their data types and values, without listing every element separately?
DECLARE
  TYPE T_userinfo IS RECORD(
    surname VARCHAR2(8),
    name    VARCHAR2(6),
    sex     VARCHAR2(6)
  );
  TYPE T_group_tab IS TABLE OF T_userinfo INDEX BY VARCHAR2(6);
  TYPE T_class_tab IS TABLE OF T_group_tab INDEX BY PLS_INTEGER;
  team_tab T_class_tab;
BEGIN
  team_tab(0)('group1').surname := 'Bradley';
  team_tab(0)('group1').name    := 'Brian';
  team_tab(0)('group1').sex     := 'male';
  team_tab(1)('group2').surname := 'Johnston';
  team_tab(1)('group2').name    := 'Hilary';
  team_tab(1)('group2').sex     := 'female';
END;
I want to see something like this in the debug window:
0 =>
  'group1' =>
    'surname' => VARCHAR2 'Bradley'
    'name'    => VARCHAR2 'Brian'
    'sex'     => VARCHAR2 'male'
1 =>
  'group2' =>
    'surname' => VARCHAR2 'Johnston'
    'name'    => VARCHAR2 'Hilary'
    'sex'     => VARCHAR2 'female'

It seems this is not possible at the moment. According to the newest user guide I could find on the web (page 25), PL/SQL Developer can show a collection of a scalar type when you hover the mouse over the variable, or when you right-click it and choose View Collection Variable from the context menu.
It is also possible to hover over a variable of a record type to show the values of its fields, or even send these values to the watch window:
team_tab T_class_tab;
team_tab0 T_group_tab;
team_tab0g T_userinfo;
...
team_tab(0)('group1').surname := 'Bradley';
team_tab(0)('group1').name := 'Brian';
team_tab(0)('group1').sex := 'male';
team_tab0 := team_tab(0);
team_tab0g := team_tab0('group1'); -- hover on team_tab0g
For complex types you have to write a custom function that dumps the data in the format you require.
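For the types from the question, such a dump could be sketched as follows (untested; it walks both index-by tables with FIRST/NEXT and prints via DBMS_OUTPUT, approximating the desired tree format):

```sql
-- Sketch: recursively dump the nested index-by tables from the question.
DECLARE
  TYPE T_userinfo IS RECORD (surname VARCHAR2(8), name VARCHAR2(6), sex VARCHAR2(6));
  TYPE T_group_tab IS TABLE OF T_userinfo INDEX BY VARCHAR2(6);
  TYPE T_class_tab IS TABLE OF T_group_tab INDEX BY PLS_INTEGER;
  team_tab T_class_tab;

  PROCEDURE dump_class (t IN T_class_tab) IS
    i PLS_INTEGER;
    g VARCHAR2(6);
  BEGIN
    i := t.FIRST;
    WHILE i IS NOT NULL LOOP                 -- outer table, PLS_INTEGER keys
      DBMS_OUTPUT.put_line (i || ' =>');
      g := t(i).FIRST;
      WHILE g IS NOT NULL LOOP               -- inner table, VARCHAR2 keys
        DBMS_OUTPUT.put_line ('  ''' || g || ''' =>');
        DBMS_OUTPUT.put_line ('    surname => VARCHAR2 ''' || t(i)(g).surname || '''');
        DBMS_OUTPUT.put_line ('    name    => VARCHAR2 ''' || t(i)(g).name    || '''');
        DBMS_OUTPUT.put_line ('    sex     => VARCHAR2 ''' || t(i)(g).sex     || '''');
        g := t(i).NEXT(g);
      END LOOP;
      i := t.NEXT(i);
    END LOOP;
  END;
BEGIN
  team_tab(0)('group1').surname := 'Bradley';
  team_tab(0)('group1').name    := 'Brian';
  team_tab(0)('group1').sex     := 'male';
  dump_class(team_tab);
END;
/
```

Since index-by table types are block-local, the dump procedure has to be declared alongside the types it inspects; it cannot be a generic utility.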

Related

Parsing json5/js object literals in Ada

New to Ada. I'm trying to work with some objects like the following {name:'Hermann',age:33} in a project, and I'd rather not write my own JSON parser for this. Is there either:
a way for me to configure Gnatcolls.JSON to parse and write these objects
or
a different library I can use with json5 or javascript object literal support?
I wrote a JSON parser from the spec pretty quickly for work while doing other things, and it took a day or a day and a half; it's not particularly hard, and I'll see about posting it to GitHub or something.
However, JSON5 is different enough that re-implementing it would be on the order of the same difficulty as writing some sort of adaptor. Editing the parser to accept the new constructs might be more difficult than one might anticipate: the IdentifierName allowed as a key means that you can't simply chain together the sequence (1) "get-open-brace", (2) "consume-whitespace", (3) "get-a-string", (4) "consume-whitespace", (5) "get-a-colon", (6) "consume-whitespace", (7) "get-JSON-object", (8) "consume-whitespace", (9) "get-a-character; if comma, go to #1, otherwise it should be an end-brace".
Perhaps one thing that makes things easier is to equate the stream- and string-operations so that you only have one production-method for your objects; there are three main ways to do this:
Make a generic such that it takes a string and gives the profile for the stream-operation.
Make a pair of overloaded functions that provide the same interface.
Make a stream that is a string; the following does this:
Package Example is
   -- String_Stream uses a string to buffer the underlying stream;
   -- it may be initialized with content from a string, or with a given
   -- length for the string underlying the stream.
   --
   -- This is intended for the construction and consumption of string-data
   -- using stream-operations.
   Type String_Stream(<>) is new Ada.Streams.Root_Stream_Type with Private;
   Subtype Root_Stream_Class is Ada.Streams.Root_Stream_Type'Class;

   -- Create a String_Stream.
   Function "+"( Length : Natural ) return String_Stream;
   Function "+"( Text   : String  ) return String_Stream;
   Function "+"( Length : Natural ) return not null access Root_Stream_Class;
   Function "+"( Text   : String  ) return not null access Root_Stream_Class;

   -- Retrieve the remaining string-data; the (POSITION..DATA'LENGTH) slice.
   Function "-"( Stream : String_Stream ) return String;
   -- Retrieve the string-data; the (1..DATA'LENGTH) slice.
   Function Data( Stream : String_Stream ) return String;
Private
   Pragma Assert( Ada.Streams.Stream_Element'Size = String'Component_Size );

   Overriding
   procedure Read
     (Stream : in out String_Stream;
      Item   :    out Ada.Streams.Stream_Element_Array;
      Last   :    out Ada.Streams.Stream_Element_Offset);

   Overriding
   procedure Write
     (Stream : in out String_Stream;
      Item   :        Ada.Streams.Stream_Element_Array);

   Type String_Stream(Length : Ada.Streams.Stream_Element_Count) is
     new Ada.Streams.Root_Stream_Type with record
      Data     : Ada.Streams.Stream_Element_Array(1..Length);
      Position : Ada.Streams.Stream_Element_Count;
   End record;
End Example;
With implementation of:
Package Body Example is
   Use Ada.Streams;

   ------------------
   -- INITIALIZERS --
   ------------------

   Function From_String( Text : String ) return String_Stream
     with Inline, Pure_Function;
   Function Buffer( Length : Natural ) return String_Stream
     with Inline, Pure_Function;

   -------------
   -- R E A D --
   -------------

   Procedure Read
     (Stream : in out String_Stream;
      Item   :    out Ada.Streams.Stream_Element_Array;
      Last   :    out Ada.Streams.Stream_Element_Offset) is
      Use Ada.IO_Exceptions, Ada.Streams;
   Begin
      -- When there is a read of zero, do nothing.
      -- When there is a read beyond the buffer's bounds, raise an exception.
      -- Note: I've used two cases here-
      --   1) when the read is greater than the buffer,
      --   2) when the read would go beyond the buffer.
      -- Finally, read the given amount of data and update the position.
      if Item'Length = 0 then
         null;
      elsif Item'Length > Stream.Data'Length then
         Raise End_Error with "Request is larger than the buffer's size.";
      elsif Stream_Element_Offset'Pred(Stream.Position)+Item'Length > Stream.Data'Length then
         Raise End_Error with "Buffer will over-read.";
      else
         Declare
            Subtype Selection is Stream_Element_Offset range
              Stream.Position..Stream.Position+Stream_Element_Offset'Pred(Item'Length);
         Begin
            Item(Item'Range) := Stream.Data(Selection);
            Stream.Position  := Stream_Element_Offset'Succ(Selection'Last);
            Last             := Selection'Last;
         End;
      end if;
   End Read;

   ---------------
   -- W R I T E --
   ---------------

   Procedure Write
     (Stream : in out String_Stream;
      Item   :        Ada.Streams.Stream_Element_Array) is
   Begin
      Declare
         Subtype Selection is Stream_Element_Offset range
           Stream.Position..Stream.Position+Stream_Element_Offset'Pred(Item'Length);
      Begin
         Stream.Data(Selection) := Item(Item'Range);
         Stream.Position        := Stream_Element_Offset'Succ(Selection'Last);
      End;
   End Write;

   -------------------------------
   -- INITIALIZER IMPLEMENTATIONS --
   -------------------------------

   -- Create a buffer of the given length, zero-filled.
   Function Buffer( Length : Natural ) return String_Stream is
      Len : Constant Ada.Streams.Stream_Element_Offset :=
        Ada.Streams.Stream_Element_Offset(Length);
   Begin
      Return Result : Constant String_Stream :=
        (Root_Stream_Type with
           Position => 1,
           Data     => (1..Len => 0),
           Length   => Len);
   End Buffer;

   -- Create a buffer from the given string.
   Function From_String( Text : String ) return String_Stream is
      Use Ada.Streams;
      Subtype Element_Range is Stream_Element_Offset range
        Stream_Element_Offset(Text'First)..Stream_Element_Offset(Text'Last);
      Subtype Constrained_Array  is Stream_Element_Array(Element_Range);
      Subtype Constrained_String is String(Text'Range);
      Function Convert is new Ada.Unchecked_Conversion(
        Source => Constrained_String,
        Target => Constrained_Array);
   Begin
      Return Result : Constant String_Stream :=
        (Root_Stream_Type with
           Position => Element_Range'First,
           Data     => Convert( Text ),
           Length   => Text'Length);
   End From_String;

   -- Classwide-returning renames, for consistency/overloading.
   Function To_Stream( Text : String ) return Root_Stream_Class is
     ( From_String(Text) ) with Inline, Pure_Function;
   Function To_Stream( Length : Natural ) return Root_Stream_Class is
     ( Buffer(Length) ) with Inline, Pure_Function;

   --------------------------
   -- CONVERSION OPERATORS --
   --------------------------

   -- Allocating / access-returning initializing operations.
   Function "+"( Length : Natural ) return not null access Root_Stream_Class is
     ( New Root_Stream_Class'(To_Stream(Length)) );
   Function "+"( Text : String ) return not null access Root_Stream_Class is
     ( New Root_Stream_Class'(To_Stream(Text)) );

   -- Conversion from text or integer to a stream; renamings of the initializers.
   Function "+"( Text   : String  ) return String_Stream renames From_String;
   Function "+"( Length : Natural ) return String_Stream renames Buffer;

   -- Convert a given Stream_Element_Array to a String.
   Function "-"( Data : Ada.Streams.Stream_Element_Array ) Return String is
      Subtype Element_Range is Natural range
        Natural(Data'First)..Natural(Data'Last);
      Subtype Constrained_Array  is Stream_Element_Array(Data'Range);
      Subtype Constrained_String is String(Element_Range);
      Function Convert is new Ada.Unchecked_Conversion(
        Source => Constrained_Array,
        Target => Constrained_String);
   Begin
      Return Convert( Data );
   End "-";

   --------------------
   -- DATA RETRIEVAL --
   --------------------

   Function "-"( Stream : String_Stream ) return String is
   Begin
      Return -Stream.Data(Stream.Position..Stream.Length);
   End "-";

   Function Data( Stream : String_Stream ) return String is
   Begin
      Return -Stream.Data;
   End Data;
End Example;
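A minimal usage sketch of the package (untested, assuming the package compiles as shown; the procedure name Demo is mine):

```ada
with Ada.Text_IO;
with Example;

procedure Demo is
   use Example;
   -- Build a stream over the text "Hello" via the access-returning "+";
   -- overload resolution picks it from the declared target type.
   Stream : constant not null access Root_Stream_Class := +"Hello";
   C      : Character;
begin
   Character'Read (Stream, C);      -- consumes the first element, 'H'
   Ada.Text_IO.Put_Line ((1 => C));
end Demo;
```

The access-returning form is the convenient one for stream attributes such as 'Read and 'Write, which expect an access to Root_Stream_Type'Class.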

Ada: How to Iterate over a private map?

Consider the following:
with Ada.Containers.Hashed_Maps;
with Ada.Containers; use Ada.Containers;
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   package Tiles is
      -- Implementation is completely hidden
      type Tile_Type is private;
      type Tile_Set is tagged private;
      type Tile_Key is private;
      procedure Add (Collection : in out Tile_Set; Tile : Tile_Type);
      function Get (Collection : in Tile_Set; Key : Natural) return Tile_Type;
      function Make (Key : Natural; Data : Integer) return Tile_Type;
      function Image (Tile : Tile_Type) return String;
   private
      type Tile_Key is record
         X : Natural;
      end record;
      function Tile_Hash (K : Tile_Key) return Hash_Type is
        (Hash_Type (K.X));
      type Tile_Type is record
         Key  : Tile_Key;
         Data : Integer;
      end record;
      package Tile_Matrix is new Ada.Containers.Hashed_Maps
        (Element_Type    => Tile_Type,
         Key_Type        => Tile_Key,
         Hash            => Tile_Hash,
         Equivalent_Keys => "=");
      use Tile_Matrix;
      type Tile_Set is new Tile_Matrix.Map with null record;
   end Tiles;

   package body Tiles is
      procedure Add (Collection : in out Tile_Set; Tile : Tile_Type) is
      begin
         Collection.Include (Key => Tile.Key, New_Item => Tile);
      end Add;

      function Get (Collection : in Tile_Set; Key : Natural) return Tile_Type is
         K : Tile_Key := (X => Key);
         C : Cursor := Collection.Find (Key => K);
      begin -- For illustration; would need to handle missing keys
         return Result : Tile_Type do
            Result := Collection (C);
         end return;
      end Get;

      function Image (Tile : Tile_Type) return String is
        (Tile.Key.X'Image & '=' & Tile.Data'Image);

      function Make (Key : Natural; Data : Integer) return Tile_Type is
         New_Key : Tile_Key := (X => Key);
      begin
         return Result : Tile_Type do
            Result.Key  := New_Key;
            Result.Data := Data;
         end return;
      end Make;
   end Tiles;

   use Tiles;
   S : Tile_Set;
   T : Tile_Type;
begin
   S.Add (Make (Key => 1, Data => 10));
   T := S.Get (1);
   Put_Line (Image (T)); -- 1, 10
   S.Add (Make (Key => 2, Data => 20));
   T := S.Get (2);
   Put_Line (Image (T)); -- 2, 20
   for X in S loop -- Fails: cannot iterate over "Tile_Set"
      -- +: to iterate directly over the elements of a container, write "of S"
      -- but "for X of S" doesn't work either.
      T := S (X); -- Fails: array type required in indexed component
      -- presumably because X isn't a cursor?
      Put_Line (Image (T));
   end loop;
end;
It seems to me that the compiler has enough knowledge to iterate over a Tile_Set and I'm supposing it won't because I haven't exposed an iterator.
How should I modify this so that "for X in S loop" is valid?
More generally, what is the idiom for hiding the implementation of underlying containers, whilst exposing indexing, iterating etc.?
It seems to me that the compiler has enough knowledge to iterate over a Tile_Set and I'm supposing it won't because I haven't exposed an iterator.
That assessment is correct. To be able to loop over a type, the type needs to define the aspects Default_Iterator and Iterator_Element, as described in LRM 5.5.1, and the aspect Constant_Indexing as described in LRM 4.1.6. Both sections read
These aspects are inherited by descendants of type T (including T'Class).
This means that since Tile_Set inherits from Tile_Matrix.Map, it does inherit these aspects which are defined on that map. However, since the inheritance relation is private, the aspects are not visible outside of that package.
You also cannot set them for the private type explicitly since 4.1.6 says
The aspects shall not be overridden, but the functions they denote may be.
Setting them on the private type would override the aspects inherited in the private part.
That leaves you with two options:
Make the inheritance relation public so that you get immediate access to all of the aspects.
Make Tile_Set encapsulate the Hashed_Map value, so that you can implement your own iteration on the type.
The second option would look like this:
type Cursor is private;

type Tile_Set is tagged private
  with Default_Iterator  => Iterate,
       Iterator_Element  => Tile_Type,
       Constant_Indexing => Constant_Reference;

function Has_Element (Position : Cursor) return Boolean;

package Tile_Set_Iterator_Interfaces is new
  Ada.Iterator_Interfaces (Cursor, Has_Element);

type Constant_Reference_Type
  (Element : not null access constant Tile_Type) is private
  with Implicit_Dereference => Element;

function Iterate (Container : in Tile_Set) return
  Tile_Set_Iterator_Interfaces.Forward_Iterator'Class;

function Constant_Reference (Container : aliased in Tile_Set;
                             Position  : Cursor)
                             return Constant_Reference_Type;

private
   -- ..
   type Cursor is record
      Data : Tile_Matrix.Cursor;
   end record;

   type Tile_Set is tagged record
      Data : Tile_Matrix.Map;
   end record;
In the implementation of these subroutines, you can simply delegate to the Tile_Matrix subroutines.
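A rough sketch of those delegating bodies (untested; the Iterator record type, the 'Unchecked_Access on the container, and the aliased Iterate parameter are assumptions of this sketch, not part of the spec above):

```ada
-- Sketch only: bodies that delegate to the wrapped Tile_Matrix map.
function Has_Element (Position : Cursor) return Boolean is
  (Tile_Matrix.Has_Element (Position.Data));

function Constant_Reference (Container : aliased in Tile_Set;
                             Position  : Cursor)
                             return Constant_Reference_Type is
  (Element => Container.Data.Constant_Reference (Position.Data).Element);

type Iterator is new Tile_Set_Iterator_Interfaces.Forward_Iterator with record
   Container : access constant Tile_Set;
end record;

overriding function First (Object : Iterator) return Cursor is
  (Data => Object.Container.Data.First);

overriding function Next (Object : Iterator; Position : Cursor) return Cursor is
  (Data => Tile_Matrix.Next (Position.Data));

-- Iterate needs a way to keep a reference to the container, hence the
-- aliased parameter here, which deviates slightly from the spec above.
function Iterate (Container : aliased in Tile_Set)
  return Tile_Set_Iterator_Interfaces.Forward_Iterator'Class is
  (Iterator'(Container => Container'Unchecked_Access));
```

Each wrapper simply unwraps its Cursor or Tile_Set record and forwards to the corresponding Hashed_Maps operation.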
The lesson is that you shouldn't inherit when your actual intent is composition.

apex_web_service.make_rest_request POST has bad json or returns ORA-06502 numeric or value error

I cannot get a successful POST to /authentication/login with E-Verify/DHS using apex_web_service.make_rest_request() in PL/SQL. I tried in APEX first but got a numeric/value error, then switched to plain PL/SQL to debug. The service uses JSON. The APEX version is 4.2. I can get a GET to work, but that does not use headers or parameters.
1. If I specify p_parm_name using string_to_table(), or populate the parameter arrays by assigning each value separately, then I get {"status":400,"error":"There was a problem in the JSON you submitted: ActionDispatch::Http::Parameters::ParseError"}
2. If I specify p_body then I get an ORA-06502: PL/SQL: numeric or value error.
3. If I specify p_body but use the wrong password, I get (what I think is) a non-ASCII response.
4. If I use varchar2 instead of clob in various spots I get the same errors.
5. If I use Postman with the same p_body then it works!
So of course I'd love for this code to work, but failing that, HOW DO I SEE THE REQUEST coming from Oracle/APEX so I can confirm what the JSON looks like (for #1 above)? Thanks!
Here is code.
declare
  l_parm_names    apex_application_global.vc_arr2;
  l_parm_values   apex_application_global.vc_arr2;
  l_resp_clob     clob;
  l_resp_length   integer;
  l_body_varchar2 varchar2(4000);
  l_body_clob     clob;
begin
  l_parm_names(1)  := 'username';
  l_parm_values(1) := 'user1234';
  l_parm_names(2)  := 'password';
  l_parm_values(2) := 'pass1234';
  l_body_varchar2  := '{"username":"user1234","password":"pass1234"}';
  l_body_clob      := to_clob(l_body_varchar2);
  apex_web_service.g_request_headers.delete();
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/json';
  l_resp_clob := apex_web_service.make_rest_request(
    p_url         => 'https://stage-everify*******login',
    p_http_method => 'POST',
    -- p_parm_name  => apex_util.string_to_table('username:password'),
    -- p_parm_value => apex_util.string_to_table('user1234:pass1234')
    -- p_parm_name  => l_parm_names,
    -- p_parm_value => l_parm_values
    p_body        => l_body_clob
  );
  INSERT INTO ev_clob (body, resp, dte, note)
  VALUES (l_body_clob, l_resp_clob, SYSDATE, 'Stack Script 1');
  commit;
end;
You need to generate the JSON in this way:
SET SERVEROUTPUT ON
DECLARE
  l_cursor SYS_REFCURSOR;
BEGIN
  OPEN l_cursor FOR
    SELECT e.empno  AS "employee_number",
           e.ename  AS "employee_name",
           e.deptno AS "department_number"
    FROM   emp e
    WHERE  rownum <= 2;
  APEX_JSON.initialize_clob_output;
  APEX_JSON.open_object;
  APEX_JSON.write('employees', l_cursor);
  APEX_JSON.close_object;
  DBMS_OUTPUT.put_line(APEX_JSON.get_clob_output);
  APEX_JSON.free_output;
END;
/
You can find more information here:
https://docs.oracle.com/cd/E59726_01/doc.50/e39149/apex_json.htm#AEAPI29737
BEGIN
  apex_json.open_object;                 -- {
  apex_json.open_object('obj');          --   "obj": {
  apex_json.write('obj-attr', 'value');  --     "obj-attr": "value"
  apex_json.close_all;                   -- }}
END;

Removing attributes from oracle json object

Use case:
Application requires a subset of attributes, based on business rules
Example:
Some students do not require to enter in home address
Database : Oracle
Proposed implementation:
Build json object containing all possible attribute named pairs, then selectively remove specific named pairs
Issue:
I hoped to use a native Oracle function to remove the specified named pair,
e.g. json_object.remove_attribute('home_address');
However, Oracle does not appear to provide any such method.
Workaround: convert the JSON object to a VARCHAR2 string, then use a combination of INSTR and REPLACE to remove the named pair.
Illustrative code:
DECLARE
  CURSOR cur_student_json (p_s_ref IN VARCHAR2) IS
    SELECT JSON_OBJECT(
             's_surname'        value s.s_surname,
             's_forename_1'     value s.s_forename_1,
             's_home_address_1' value s.s_home_address_1
             RETURNING VARCHAR2 ) student_json
    FROM   students s
    WHERE  s.s_ref = p_s_ref;
BEGIN
  FOR x IN cur_student_json (p_s_ref) LOOP
    vs_student_json := x.student_json;
    EXIT;
  END LOOP;
  -- Determine student type
  vs_student_type := get_student_type(p_s_ref);
  -- Collect list of elements not required, based on student type
  FOR x IN cur_json_inorout(vs_student_type) LOOP
    -- Remove element from the json
    vs_student_json := json_remove(vs_student_json, x.attribute);
  END LOOP;
END;
/
Question:
There must be a more elegant method to achieve this requirement.
Classify this under RTFM. It needs Oracle 12.2.
DECLARE
  -- Declare an object of type JSON_OBJECT_T
  l_obj JSON_OBJECT_T;
  -- Declare cursor to build json object
  CURSOR cur_student_json (p_s_ref IN VARCHAR2) IS
    SELECT JSON_OBJECT(
             's_surname'        value s.s_surname,
             's_forename_1'     value s.s_forename_1,
             's_home_address_1' value s.s_home_address_1
           ) student_json
    FROM   students s
    WHERE  s.s_ref = p_s_ref;
BEGIN
  -- Initialise object
  l_obj := JSON_OBJECT_T();
  -- Populate the object
  FOR x IN cur_student_json (p_s_ref) LOOP
    l_obj := JSON_OBJECT_T.parse(x.student_json);
    EXIT;
  END LOOP;
  -- Determine student type
  vs_student_type := get_student_type(p_s_ref);
  -- Collect list of elements not required, based on student type
  FOR x IN cur_json_inorout(vs_student_type) LOOP
    -- Remove element from the json
    l_obj.remove(x.attribute);
  END LOOP;
  -- Display modified object
  dbms_output.put_line(l_obj.stringify);
END;
/

Why is Delphi (Zeos) giving me widestring fields in SQLite when I ask for unsigned big int?

I am using the latest Zeos with SQLite 3. It is generally going well, converting from MySQL, once we made all the persistent integer fields TLargeInt.
But when we use the column definition unsigned big int (the only unsigned type allowed according to https://www.sqlite.org/datatype3.html), Delphi reports the resulting field as ftWideString.
No, it does not "revert" to string; SQLite just stores the data as it is provided.
As the documentation states:
SQLite supports the concept of "type affinity" on columns. The type affinity of a column is the recommended type for data stored in that column. The important idea here is that the type is recommended, not required. Any column can still store any type of data. It is just that some columns, given the choice, will prefer to use one storage class over another. The preferred storage class for a column is called its "affinity".
If you supply/bind a text value, it will store a text value. There is no conversion to the type given in the CREATE TABLE statement, as there would be in other, stricter RDBMSs, e.g. MySQL.
So in your case, if you retrieve the data as ftWideString, I guess this is because you wrote the data as TEXT. For instance, the tool or program creating the SQLite3 content from your MySQL is writing this column as TEXT.
About numbers, there is no "signed"/"unsigned", nor precision check in SQLite3. So if you want to store "unsigned big int" values, just use INTEGER, which are Int64.
But in all cases, even though the SQLite3 API does support unsigned 64-bit integers, this sqlite3_uint64 type is unlikely to be supported by the Zeos/ZDBC API or by Delphi (older versions of Delphi do NOT support UInt64). To be safe, you would be better off retrieving such values as TEXT, then converting them to UInt64 manually in your Delphi code.
Update:
Are you using the TDataSet descendant provided by Zeos? This component is tied to DB.pas, so it expects a single type per column. It may be the source of the confusion in your code (which you did not show at all, so it is hard to figure out what's happening).
You would be better off using the lower-level ZDBC interface, which lets you retrieve the column type for each row and call whichever value getter method you need.
Zeos uses the following code (in ZDbcSqLiteUtils.pas) to determine a column's type:
Result := stString;
...
if StartsWith(TypeName, 'BOOL') then
  Result := stBoolean
else if TypeName = 'TINYINT' then
  Result := stShort
else if TypeName = 'SMALLINT' then
  Result := stShort
else if TypeName = 'MEDIUMINT' then
  Result := stInteger
else if TypeName = {$IFDEF UNICODE}RawByteString{$ENDIF}('INTEGER') then
  Result := stLong //http://www.sqlite.org/autoinc.html
else if StartsWith(TypeName, {$IFDEF UNICODE}RawByteString{$ENDIF}('INT')) then
  Result := stInteger
else if TypeName = 'BIGINT' then
  Result := stLong
else if StartsWith(TypeName, 'REAL') then
  Result := stDouble
else if StartsWith(TypeName, 'FLOAT') then
  Result := stDouble
else if (TypeName = 'NUMERIC') or (TypeName = 'DECIMAL')
  or (TypeName = 'NUMBER') then
begin
  { if Decimals = 0 then
    Result := stInteger
  else} Result := stDouble;
end
else if StartsWith(TypeName, 'DOUB') then
  Result := stDouble
else if TypeName = 'MONEY' then
  Result := stBigDecimal
else if StartsWith(TypeName, 'CHAR') then
  Result := stString
else if TypeName = 'VARCHAR' then
  Result := stString
else if TypeName = 'VARBINARY' then
  Result := stBytes
else if TypeName = 'BINARY' then
  Result := stBytes
else if TypeName = 'DATE' then
  Result := stDate
else if TypeName = 'TIME' then
  Result := stTime
else if TypeName = 'TIMESTAMP' then
  Result := stTimestamp
else if TypeName = 'DATETIME' then
  Result := stTimestamp
else if Pos('BLOB', TypeName) > 0 then
  Result := stBinaryStream
else if Pos('CLOB', TypeName) > 0 then
  Result := stAsciiStream
else if Pos('TEXT', TypeName) > 0 then
  Result := stAsciiStream;
If your table uses any other type name, or if the SELECT output column is not a table column, then Zeos falls back to stString.
There's nothing you can do about that; you'd have to read the values from the string field (and hope that the conversion to string and back does not lose any information).
It might be a better idea to use some other library that does not assume that every database has fixed column types.
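Given the mapping above, one workaround (a sketch, assuming you control the schema) is to declare the column with a type name the code does recognize; SQLite's type affinity rules give both spellings the same INTEGER affinity, so the stored data is unchanged:

```sql
-- 'BIGINT' hits the "TypeName = 'BIGINT'" branch above and maps to stLong.
CREATE TABLE t (value BIGINT);   -- instead of: value UNSIGNED BIG INT
```

Since SQLite has no true unsigned 64-bit column type anyway, this only changes how Zeos classifies the field, not what SQLite stores.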
The latest Zeos, that is, which one?
See if it's the same in 7.2 svn 3642:
http://svn.code.sf.net/p/zeoslib/code-0/branches/testing-7.2/
Michal