Exporting Oracle APEX regions with DBMS_LOB - html

I'm interested in exporting the contents of a page generated by Oracle APEX to an external file while preserving as much formatting as possible. Eventually, I'd like to export it to either a .doc, .xls, or .pdf format. For now, I'm testing with a .doc file.
Currently, I'm attempting to do this by creating a PL/SQL anonymous block "Process" that executes when an "Export" button is pressed. Based on an example I found online, if I use the following code in the process, I can output one of the items on my page to a .doc file:
DECLARE
    test_blob BLOB;
BEGIN
    -- Build a temporary BLOB from the page item
    dbms_lob.createtemporary(test_blob, FALSE);
    dbms_lob.open(test_blob, dbms_lob.lob_readwrite);
    dbms_lob.append(test_blob, utl_raw.cast_to_raw(:P4016_ITEM_NAME));

    -- Send the HTTP headers and stream the BLOB back as a .doc download
    owa_util.mime_header('application/doc', FALSE);
    htp.p('Content-Length: ' || dbms_lob.getlength(test_blob));
    htp.p('Content-Disposition: attachment; filename="text.doc"');
    owa_util.http_header_close;
    wpg_docload.download_file(test_blob);

    dbms_lob.close(test_blob);
END;
However, I would like to output some regions on my page that include tables, which are not considered items as far as I know (I'm still very new to APEX). If I include the table name in the DBMS_LOB.APPEND line, I receive an error message. Does anyone know of a simple way to reference these regions?
The only workaround I've found is to replicate the page in my exported file by enclosing the results of the SQL queries used to populate the tables in HTML modelled on the HTML of my APEX page. In other words, if I wanted to italicize something, I would do the following:
...
dbms_lob.append(test_blob, UTL_RAW.CAST_TO_RAW('<html><i>'));
dbms_lob.append(test_blob, [PARSED SQL QUERY]);
dbms_lob.append(test_blob, UTL_RAW.CAST_TO_RAW('</i></html>'));
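To cover a whole region, I have been extending that idea to a cursor loop over the region's source query, roughly like this, where EMP and ENAME just stand in for the real query:
-- Sketch only: EMP/ENAME are placeholders for the region's actual source query
dbms_lob.append(test_blob, utl_raw.cast_to_raw('<html><table>'));
FOR r IN (SELECT ename FROM emp ORDER BY ename) LOOP
    dbms_lob.append(test_blob,
        utl_raw.cast_to_raw('<tr><td><i>' || r.ename || '</i></td></tr>'));
END LOOP;
dbms_lob.append(test_blob, utl_raw.cast_to_raw('</table></html>'));
-- then the same mime_header / download_file calls as in the block above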
If anyone knows of a simpler way to do this, preferably involving a simple reference to my page regions, I would greatly appreciate it.

There is a way (in fact, three ways) to get PDF output - see http://www.oracle.com/technetwork/developer-tools/apex/learnmore/custom-pdf-reports-1953918.pdf.
You can also use an Interactive Report, which has a built-in export option, though by default it can only export to HTML.
It is not exactly what you asked for, but you don't need to write any code.

Related

Mediawiki dumpBackup parameters

I fail to understand some options in the dumpBackup.php maintenance script of Mediawiki.
What is the effect of --include-files? In my test wiki, the output of dumpBackup.php --current --include-files and that of dumpBackup.php --current both contain the pages of the File: namespace, and I see no difference.
What is the effect of --uploads? In my test wiki I see that the XML file contains a tiny bit more XML, but to me it looks like this is all information that is already there as part of the File: page. What is the use of this flag?
When I add both --include-files and --uploads I get the next surprise. I actually expected the combined effect of both options, but what I get is the file content of the uploaded files and the upload record. Why did I not get the file contents when I used --include-files alone?
When I use only --include-files and --uploads but no --current, I would have expected to get the content of the uploaded files and the upload record (and none of the other pages). However, I get the warning "no valid action specified" and no further information at all.
I am completely confused since I do not understand the logic behind all of this.

How to add dynamic data to an HTML document via a Power Automate flow?

I'm working on a flow to take data from a SharePoint list and add it at a specific point in an HTML document. I am just saving the outputted HTML file to my OneDrive whilst testing.
I have found that when I paste my HTML code into a Compose block and then output this, it works fine and I'm left with a normal-looking HTML page when opened in a browser.
However, as soon as I add dynamic data in place of certain HTML elements, it all seems to go wrong.
Firstly, the outputted HTML file now contains '\n' in place of every line break. I have also noticed that the outputted HTML code has been changed to an array.
I've attached screenshots of my flow below.
[Screenshots: Get Items, Compose, Create File, Create File Peek Code, Compose Output, Create Output, Create Output Raw, and the outputted HTML file.]
Are you processing the output from that Compose action?
What you are doing is correct, and it would not matter whether you are using a single line or multiple lines of text.
It seems to me that you are using the output of the Compose somewhere else, or maybe the file content box in the OneDrive action contains an expression?
Please check the following:
The output of the Compose action should look normal.
Make sure your OneDrive action's parameters look like this when you "Peek code" (click the action's ellipsis). If it looks different, you'll have to track down the variable/output you are using or review the expression.
I have replicated what you are doing and it is working fine; see the attached images.
[Screenshots: the flow in edit mode and after a run, the Compose action, the OneDrive action, and the created file in OneDrive.]
EDIT: Updated the response to show two different ways of using the result from SharePoint.
Use the OneDrive action inside the 'Apply to each'; if there is only one result, it will run once anyway.
Alternatively, you can omit the 'Apply to each' action by using the expression box to set the values in the Compose; this is the syntax you can use:
outputs('Get_items')?['body']?['value']?[rowIndex]?['ColumnName']
In case there are several rows in the result, you'll have to validate which is the one you need.
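For example, assuming the list has a column called Title (a placeholder name here) and you want the first row returned, the expression would look like:
outputs('Get_items')?['body']?['value']?[0]?['Title']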

VBA to create a form and load an attached image to it

Client has asked me to create a self contained tool in MS Access, versions 2007 and 2016. It needs to be self contained because it will be copied to and from various laptops at various times. The tool may not create, delete, or modify any file except the accdb database itself. When the tool is in use, the user is unlikely to have network or internet access.
One of the criteria is the creation of new forms each time it is run. I realize that Access is meant to have all the forms and their controls already built before deployment, but client doesn't want that. I have solved that problem, creating x number of forms upon certain conditions, and creating 30-40 controls on each form based upon certain conditions, each with their own events, etc.
Now, how do I load his logo into a control on each form? Remember, the accdb must be self contained, so I can't count on the logo being in a certain directory or even on the machine in use, and I can't write it to the file system myself.
I can and have loaded the logo (jpeg) into one of my tables in an attachment field. It will be the only attachment in that field. It would be just as easy for it to be its own table, if that helps.
I can create attachment controls with VBA, but I don't know how to set the ControlSource to the FileData inside the attachment with VBA.
I also have had poor success attempting to embed the picture in an image control in a hidden form and setting the .picture property to the image name. It only seems to be working on my machine.
So, how do I display an attached jpeg on a newly created form?
This was just asked and answered on SO under access-vba. Here's one solution:
Saving Image as OLE Object, in Access
There are many others via a Google and SO search.
EDIT: You must read the whole question to see the author's answer.
Answer:
So, what I ended up doing was following this
https://support.microsoft.com/en-us/kb/210486
I use the ReadBLOB function to read the file and save it into the database. Then, when I run a report or open a form that has the picture, on load I use the WriteBLOB function to write the file to a temp folder and then use that path to populate an Image object.
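If you would rather read straight from the attachment field you already have (instead of the KB article's OLE field and ReadBLOB/WriteBLOB functions), a rough, untested VBA sketch of the same write-to-a-temp-file-then-set-.Picture idea using DAO could look like this; the table, field, form, and control names are all placeholders:
' Sketch only: tblConfig, Logo, frmGenerated and imgLogo are placeholder names
Dim db As DAO.Database
Dim rs As DAO.Recordset2
Dim rsAtt As DAO.Recordset2
Dim strTemp As String

Set db = CurrentDb
Set rs = db.OpenRecordset("SELECT Logo FROM tblConfig", dbOpenSnapshot)
Set rsAtt = rs!Logo.Value                          ' child recordset over the attachment rows
strTemp = Environ("TEMP") & "\logo.jpg"
On Error Resume Next
Kill strTemp                                       ' SaveToFile errors if the file already exists
On Error GoTo 0
rsAtt!FileData.SaveToFile strTemp                  ' write the jpeg out to a temp file
Forms("frmGenerated").Controls("imgLogo").Picture = strTemp
rsAtt.Close
rs.Close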

Scan an area of a web page's source code for changes while reporting it?

This is one heck of a confusing question to ask, so here goes. Firstly, I'm not asking you to write me any code; I just need help going in the right direction for what I'm trying to achieve. Basically, the task is this: I want to scan a selected area of a web page's source code for changes, and if something does change, I want to report it somewhere (like a console). However, I do not want just a notification that something changed; I also want to know what the change is/was. I've been looking into things like jsoup, but I am still struggling to even find out what this technique is called.
Any pointers would be insanely appreciated. Thanks, Optimistic.
Here are some steps assuming this is from a node.js project:
Get the URL for the specific script file you're looking for a change in.
Using the request() module, fetch that URL.
Break the data up into lines (probably using .split()).
Find the specific line you are looking for, either by counting line numbers or by searching for some representative text in that line.
Using some sort of search in that line (perhaps a regex), find the current value of the exact item in that line you are looking for.
Save the current value.
Then, at some future time, repeat this whole process and compare what you find to the previous value.
If this is being done from a browser instead of node.js, then use an Ajax call to retrieve the file. If the file is on another domain from your web page and that domain does not permit cross-origin requests, then you cannot solve this problem in an automated fashion from a browser in your own web page.
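Here's a rough sketch of those steps in node.js (the URL, the marker text, and the regex are placeholders you would adapt to the script you are watching):
// Sketch only: adapt SCRIPT_URL, the marker text and the regex to your page
const request = require('request'); // npm install request

const SCRIPT_URL = 'https://example.com/static/app.js';
let previousValue = null;

function checkForChange() {
  request(SCRIPT_URL, (err, res, body) => {
    if (err || res.statusCode !== 200) return console.error('fetch failed', err);

    // Break the source into lines and find the one we care about
    const line = body.split('\n').find(l => l.includes('var buildNumber'));
    if (!line) return console.log('marker line not found');

    // Pull the current value out of that line with a regex
    const match = line.match(/buildNumber\s*=\s*"([^"]+)"/);
    const current = match && match[1];

    // Report both the old and the new value when they differ
    if (previousValue !== null && current !== previousValue) {
      console.log('changed: "' + previousValue + '" -> "' + current + '"');
    }
    previousValue = current;
  });
}

// Repeat the whole process at some future time and compare
setInterval(checkForChange, 5 * 60 * 1000);
checkForChange();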
Here is how I would do it with Jsoup:
Document doc = Jsoup.connect(url).get();
String scriptCssQuery = "script"; // Tune this CSS query to find THE script you need.
Element script = doc.select(scriptCssQuery).first();
if (script != null) {
String scriptLines = script.html();
// Store the changing line somewhere and compare it to its previous value...
}

Extract HTML Tables from Multiple webpages Using R

Hi, I have done thorough research and have come this far. All I am trying to do is extract an HTML table that spans many webpages.
I have to query sec.gov's database, and the search then returns the appropriate number of results (the size and number of pages vary with every query). For example:
Link: http://www.sec.gov/cgi-bin/srch-edgar
Inputs to be given:
Enter a Search string box: form-type=(8-k) AND filing-date=20140523
Start: 2014
End: 2014
How can I do this totally in R without even opening the browser?
I am sharing what I have done so far.
I tried many packages, and the closest I came was with the RCurl package. But for the getURL function I opened the browser, ran the query in the browser, and pasted the resulting URL into getURL. It returned a very long character string, which has the URLs that can be looped over to produce the output I want. All this information is in the "center" tag of the output.
Now I do not know how to get those URLs out of the middle of that character string.
Also, this is not what I wanted. I wanted to run a web query directly from R and get the varied HTML table outputs directly into R. Is this possible at all?
Thanks
Meena
Yes, it is possible. You will want to use a combination of the RCurl and XML packages. You will need to programmatically generate the query parameters in the URL (based on the HTML form) and then use getURL() or getURLContent(). Sometimes, the server will expect an HTTP POST, so there is postForm().
To parse the result, look up the XPath language, which the XML package supports with getNodeSet(). I think there is also a function in the XML package for parsing an HTML table into a data.frame.
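Putting those pieces together, a rough, untested sketch might look like this (the query-string parameter names are my guess from old EDGAR search URLs; inspect the search form's HTML to confirm them):
# Sketch only: check the form's HTML for the real parameter names
library(RCurl)
library(XML)

base  <- "http://www.sec.gov/cgi-bin/srch-edgar"
query <- "form-type=(8-k) AND filing-date=20140523"
url   <- paste0(base, "?text=", URLencode(query, reserved = TRUE),
                "&first=2014&last=2014")

page <- getURL(url)
doc  <- htmlParse(page)

# Pull the result links out with XPath...
links <- xpathSApply(doc, "//a/@href")

# ...or read any HTML tables straight into data frames
tables <- readHTMLTable(doc, stringsAsFactors = FALSE)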
You might want to invest in this book.