Accessing ArcGIS Pro geoprocessing history programmatically

I am writing an ArcGIS Pro Add-In and would like to view items in the geoprocessing history programmatically. The goal is to get the list of tools and parameters used, so that a workflow can be better understood and recreated later, perhaps in another project where we would not have direct access to the history within ArcGIS Pro.
After a lot of searching through documentation and online posts, and after stepping through my code with breakpoints, I've found that some of this data does exist privately within the HistoryProjectItem class. Since it is a private member of a sealed class, however, it seems there is nothing I can do to access it. The other place I've seen this data is less than ideal: the user can opt to write the geoprocessing history to an XML log file that lives under /AppData/Roaming/ESRI/ArcGISPro/ArcToolbox/History. Our team has been told that this file may be a problem because certain recursive operations can cause it to balloon out of control, and from what I've read online, most people want this setting disabled to avoid large log files taking up space on their machines. Overall the log file doesn't seem like a great option, as we fear it could slow users down by writing large log files while they are working.
I was wondering whether this data is stored somewhere I have missed that could be accessed programmatically from the add-in. It seems to me that the data within Project.Items is always stored regardless of user settings, but it appears to be inaccessible this way due to class member visibility. I'm not familiar enough with geodatabases and ArcGIS file formats to know whether a project will always have a .gdb from which we could perhaps read the history instead.
Any insight on how to read the geoprocessing history in a way that is minimally intrusive to the user would be ideal. Is this data available elsewhere?

This is the closest/best solution I have found so far that avoids writing to the history logs, which most people disable because of file-size bloat and warnings that one operation may run other operations recursively, causing the file to balloon massively.
https://community.esri.com/t5/arcgis-pro-sdk-questions/can-you-access-geoprocessing-history-programmatically-using-the/m-p/1007833#M5842
It involves reading the .aprx file (which is written on save) by unzipping it, parsing the XML, and filtering the contents down to the GPHistory items. From there I was able to read all the parameters, environment options, status, and duration of each operation, which is exactly what I was hoping to get.
// required namespaces: SharpZipLib for reading the zip, Newtonsoft.Json for the JSON dump
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Xml;
using ArcGIS.Core.CIM;
using ICSharpCode.SharpZipLib.Zip;
using Newtonsoft.Json;

public static void ListHistory()
{
    // this can be run in a console app (or within a Pro add-in)
    CIMGISProject project = GetProject(@"D:\tests\topologies\topotest1.aprx");
    foreach (CIMProjectItem hist in project.ProjectItems
        .Where(itm => itm.ItemType == "GPHistory"))
    {
        Debug.Print("+++++++++++++++++++++++++++");
        Debug.Print($"{hist.Name}");
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(hist.PropertiesXML);
        //it sure would be nice if the Pro SDK had things like the MdProcess class in ArcObjects
        //https://desktop.arcgis.com/en/arcobjects/latest/net/webframe.htm#MdProcess.htm
        var json = JsonConvert.SerializeXmlNode(doc, Newtonsoft.Json.Formatting.Indented);
        Debug.Print(json);
    }
}

static CIMGISProject GetProject(string aprxPath)
{
    //aprx files are actually zip files
    //https://www.nuget.org/packages/SharpZipLib
    using (var zipFile = new ZipFile(aprxPath))
    {
        var entry = zipFile.GetEntry("GISProject.xml");
        using (var stream = zipFile.GetInputStream(entry))
        using (StreamReader reader = new StreamReader(stream))
        {
            var xml = reader.ReadToEnd();
            //deserialize the xml from the aprx file to hydrate a CIMGISProject
            return ArcGIS.Core.CIM.CIMGISProject.FromXml(xml);
        }
    }
}
Code provided by Kirk Kuykendall
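Since the .aprx on disk only reflects the geoprocessing history up to the last save, when running this from inside an add-in I would save the current project first and then point the reader at the project's own path. A minimal sketch, assuming ListHistory is adapted to take the .aprx path as a parameter; Project.Current.URI and SaveAsync are Pro SDK members, but treat this as untested:

using System.Threading.Tasks;
using ArcGIS.Desktop.Core;

public static async Task ListCurrentProjectHistoryAsync()
{
    // the .aprx on disk only contains operations recorded before the last save
    await Project.Current.SaveAsync();

    // Project.Current.URI is the full path to the current .aprx
    ListHistory(Project.Current.URI);   // assumes a ListHistory(string aprxPath) overload
}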

Related

Am I structuring my Razor app "correctly"?

I'm building my first razor app and am wondering if I'm organizing my files in a good way (one that will be clear if other people were to look at my code, and one that will not inhibit performance). My main concern is with library-like functions that I plan to use throughout the app. For those I've been storing string formatting functions and the like in the "App_Code" folder--for example I have the below function stored in a class there:
public static decimal? ToDecimal(object? val) => (val is DBNull) ? (decimal?)null : Convert.ToDecimal(val);
Additionally, I plan to use a "Helpers" folder to store any HTML that will need to be repeated throughout the app. Are these the correct places to put this kind of stuff?
And then my bigger question is about data retrieval/storage. Say I have a "Location" class which is tied to a SQL table. If I were using EF, retrieving and storing data would be a single command, but in my case I'm writing my own functions to get the data. Right now I have all that code on the Index page for the Location (retrieving all locations and the various linked files), but now I need to re-use a lot of those functions on the edit & view pages for a single location. Where is the right place to store those general functions (e.g. GetLocation(id), GetAllLocations, etc.)? Would it make sense to just make a "Library" folder and put them in a class there, with a dedicated file for each class I need to interact with? Or just throw it in App_Code?
A good example of the last one is this function, where I retrieve all States from the database. I expect to need to do this on multiple pages--where should I store it?
private void GetStates()
{
    int id;
    string DbQuery = "SELECT StateId,Abbreviation,StateName " +
                     "FROM [dbo].[State] " +
                     "WHERE RowStatus<>0 ";
    States = new Dictionary<int, State>();
    SqlCommand DbCommand = new SqlCommand(DbQuery, DbConnection);
    SqlDataReader dataReader = DbCommand.ExecuteReader();
    while (dataReader.Read())
    {
        id = Convert.ToInt32(dataReader[0]);
        States.Add(id, new State()
        {
            StateId = id,
            Abbreviation = Convert.ToString(dataReader[1]),
            StateName = Convert.ToString(dataReader[2])
        });
    }
    dataReader.Close();
    DbCommand.Dispose();
}
If you have an App_Code folder, you have an ASP.NET Web Pages app, not a Razor Pages app as this question has been tagged. Web Pages is an old framework that hasn't seen any updates for years and is based on the old .NET Framework Web Site project type, as opposed to the Web Application project type, which requires compilation before deployment.
All C# code has to go in the App_Code folder or in separate class libraries, which have to be manually added to the bin folder. In short, ASP.NET Web Pages is not a good choice if you are concerned about code organisation (or much else, if you also care about performance, cross-platform deployment, framework updates, extensibility, testing etc). You should look at Razor Pages instead: https://www.learnrazorpages.com/razor-pages/tutorial/bakery
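In Razor Pages, the usual home for query helpers like GetStates is a plain service class registered with the built-in dependency injection container and injected into any PageModel that needs it. A minimal sketch of that layout; the folder, class and connection-string names here are hypothetical, not prescribed by the framework:

// Services/StateService.cs (hypothetical location and names)
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

public interface IStateService
{
    Task<Dictionary<int, State>> GetStatesAsync();
}

public class StateService : IStateService
{
    private readonly string _connectionString;

    public StateService(IConfiguration config) =>
        _connectionString = config.GetConnectionString("Default"); // hypothetical connection string name

    public async Task<Dictionary<int, State>> GetStatesAsync()
    {
        var states = new Dictionary<int, State>();
        using var conn = new SqlConnection(_connectionString);
        using var cmd = new SqlCommand(
            "SELECT StateId, Abbreviation, StateName FROM dbo.State WHERE RowStatus <> 0", conn);
        await conn.OpenAsync();
        using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            var id = reader.GetInt32(0);
            states.Add(id, new State
            {
                StateId = id,
                Abbreviation = reader.GetString(1),
                StateName = reader.GetString(2)
            });
        }
        return states;
    }
}

// Program.cs: register the service once
// builder.Services.AddScoped<IStateService, StateService>();

// Pages/Locations/Index.cshtml.cs: constructor-inject it wherever it is needed
public class IndexModel : PageModel
{
    private readonly IStateService _stateService;

    public IndexModel(IStateService stateService) => _stateService = stateService;

    public Dictionary<int, State> States { get; private set; }

    public async Task OnGetAsync() => States = await _stateService.GetStatesAsync();
}

The PageModel then stays focused on page concerns, and the same IStateService can be injected into the edit and view pages as well.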

Questions on extending GAS spreadsheet usefulness

I would like to offer the ability to view output from the same data in a spreadsheet, in a TBA sidebar and, ideally, in another type of HTML window for output created with a JavaScript library like THREE.
The non-Google version I made is a web page with iframes that can be resized, dragged and opened/closed and, most importantly, whose content shares the same record object in the top window. So I believe, perhaps naively, that something similar could be made an option inside this established and popular application.
At the very least, the TBA trial has shown me that it is useful to view and manipulate information from either the sheet or the TBA. The facility to navigate large building projects, clone rooms and floors, and combine JSON records (stored in repositories like myjson) for collaborative work is particularly inspiring for me.
I have tried using the sidebar for different HTML files, but the fact that only one stays open is not very useful, and frankly, sharing record objects is still beyond me. So that is the main question. Whether Google people would consider an extra window type is probably a bit ambitious, but I think it is worth asking.
You can't maintain a global variable across calls to HtmlService. When you fire off an HtmlService instance, which runs in the browser, the server-side code that launched it exits.
From that point control is client side, in the HtmlService code. If you then launch a server-side function (using google.script.run from the client side), a new instance of the server-side script is launched with no memory of the previous instance, which means that any global variables are re-initialized.
There are a number of techniques for persisting values across calls.
The simplest one, of course, is to pass the value to HtmlService in the first place, then to pass it back to the server side as an argument to google.script.run.
Another is to use the Properties Service to hold your values; they will still be there when you go back, but there is a 9k maximum entry size.
If you need more space, the Cache Service can hold 100k in a single entry and you can use it in the same way (there is a slight chance it will be cleaned away, although that has never happened for me).
If you need even more space, there are techniques for compressing and/or spreading a single object across several cache entries, as documented here: http://ramblings.mcpher.com/Home/excelquirks/gassnips/squuezer. The same method supports Google Drive, or Google Cloud Storage if you need to persist data even longer.
Of course you can't pass non-stringifiable objects like functions and so on, but you can postpone their evaluation and allow the initialized server-side script to evaluate them, and even share the same code between server and client or across projects.
Some techniques for that are described in these articles:
http://ramblings.mcpher.com/Home/excelquirks/gassnips/nonstringify
http://ramblings.mcpher.com/Home/excelquirks/gassnips/htmltemplateresuse
However, in your specific example, it seems that the global data you want is fetched from an external API call. Why not just retrieve it client side in any case? If you need to do something with it server side, then pass it to the server using google.script.run.
window.open and window.postMessage() solved both of the problems I described above.
I hope you will be assured from the screenshot and code that the usefulness of Google Sheets can be extended for the common good. At the core are the two methods for inputting, copying and reviewing textual data: the spreadsheet for a slice through a set of data, and the TBA for navigating associations in the Trail (x axis) and Branches (y axis), and for working on Aspects (z axis) of the current selection that require attention, in collaborations, from different interests.
So, for example, a nurse would find the TBA useful for recording many aspects of an examination of a patient, whereas a pharmacist might find a spreadsheet more useful for stock control. Both record their data in a common object I call 'nset' (a hierarchy of named sets), saved in the cloud and available for distribution in collaborative activities.
The TBA is also useful for cloning large sets of records. For example, one room, complete with furniture, can be replicated on one floor, and then that floor, complete with rooms, can be replicated for a complete tower.
Being able to maintain parallel nset objects in multiple monitor windows via postMessage means unrivalled opportunities to display the same data in different forms of multimedia, including interactive animation, augmented reality, CNC machine instructions, IoT controls...
Here is the related code:
From the TBA in sidebar:
window.addEventListener("message", receiveMessage, false);

function openMonitor(nset){
  var params = [
    'height=400',
    'width=400'
  ].join(',');
  let file = 'http://glasier.hk/blazer/model.html';
  popup = window.open(file, 'popup_window', params);
  popup.moveTo(100, 100);
}

var popup;

function receiveMessage(event) {
  let ed, nb;
  ed = event.data;
  // the message is either a plain string or an array of [label, payload]
  nb = typeof ed === "string" ? ed : ed[0];
  switch (nb) {
    case "Post":
      console.log("Post");
      popup.postMessage(["Refreshing nset", nset], "http://glasier.hk");
      break;
  }
}
function importNset(){
  google.script.run
    .withSuccessHandler(function (code) {
      root = '1grsin';
      trial = 'msm4r';
      orig = 'ozs29';
      code = orig;
      path = "https://api.myjson.com/bins/" + code;
      $.get(path)
        .done((data, textStatus, jqXHR) => {
          nset = data;
          openMonitor(nset);
          cfig = nset.cfig;
          start();
        });
    })
    .sendCode();
}
From the popup window:
$(document).ready(function(){
  name = $(window).attr("name");
  if (name === "workshop"){
    tgt = opener.location.href;
  }
  else {
    tgt = "https://n-rxnikgfd6bqtnglngjmbaz3j2p7cbcqce3dihry-0lu-script.googleusercontent.com";
  }
  $("#notice").html(tgt);
  opener.postMessage("Post", tgt);
  $(window).on("resize", function(){
    location.reload();
  });
});
window.addEventListener("message", receiveMessage, false);

function receiveMessage(event) {
  let ed, nb;
  ed = event.data;
  nb = typeof ed === "string" ? ed : ed[0];
  switch (nb) {
    case "Post":
      // reply to the opener (the sidebar) with the current nset
      opener.postMessage(["nset", nset], "*");
      break;
    default:
      src = event.origin;
      notice = [ed[0], " from ", src];
      console.log(notice);
      // $("#notice").html(notice).show();
      nset = ed[1];
      cfig = nset.cfig;
      reloader(src);
  }
}
I should explain that the HTML part of the sidebar was built in a localhost workshop, with all styles and scripts compiled into a single file for pasting into the sidebar HTML file. The workshop is also available online. The Google target is provided by event.origin in postMessage; this would have to be issued to anyone wishing to make different monitors. For now I have just made the 3D modelling monitor with Three.js.
I think, after much research and questioning around here, this should be the proper answer.
The best way to implement global variables in GAS is through user properties or script properties: https://developers.google.com/apps-script/reference/properties/properties-service. If you'd rather deal with just one, write them to an object and then JSON.stringify the object (and JSON.parse to get it back).

How to design an app that does heavy tasks and show the result in the frontend (ex Google Search Console)

Let's imagine this:
I have to download an XML document from a URL;
I have to process this document and persist its information in the database, creating or updating a lot of entities.
I think the best way is to use queues, or maybe cron jobs.
My problem is this: if I use the same app to do the heavy tasks and also to show the end user the results of those heavy tasks, the heavy tasks may slow down the main website.
Take a more concrete example from real life: Google Search Console (or any other app that does heavy tasks and shows results to the end user).
Google Search Console gets the XML sitemap, then starts downloading each webpage, performs a lot of analysis on each webpage, then saves the results to a database so the end user can see the errors of his website and other useful information.
So, suppose I want to rebuild Google Search Console as a Symfony app: what are the possible approaches?
I think that I certainly have to use queues, but should the app that downloads the webpages, the app that processes those webpages, and the public frontend that shows the results of these operations be the same app, or two or three separate apps?
That is, do I have to create a single application that does all these things, or should I create one app to download webpages, another to process them, and another to show the results to the user?
I've been thinking about this a lot, and I'm not able to find a good design to follow.
My instinct is to create a separate app for each of those tasks, but it seems that creating multiple Symfony apps isn't a good choice: Symfony 2 multiple apps?
So I really don't know which path to follow: multiple apps or one big app? And if I use one big app, should I use cron jobs, or do I have to use queues anyway?
I will first detail the pieces of the architecture.
You can create a service like this:
namespace SomeBundle\Utils;

use SomeBundle\Entity\MyEntity;
use SomeBundle\myException;

class SomeService
{
    private $var;

    public function __construct()
    {
    }

    public function doStuff()
    {
        // Do stuff
    }
}
To debug your service, you can call it from a test controller or a basic controller.
You can create a command like this:
namespace SomeBundle\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use SomeBundle\Utils\SomeService;

class SomeCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this->setName('some:command')
             ->setDescription('My awesome command');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        // the container is available here if your service needs dependencies
        $container = $this->getContainer();

        $service = new SomeService();
        $results = $service->doStuff();
    }
}
You can create a console application like this:
require __DIR__.'/vendor/autoload.php';
require_once __DIR__.'/app/AppKernel.php';

use SomeBundle\Command\SomeCommand;
use Symfony\Bundle\FrameworkBundle\Console\Application;

$kernel = new AppKernel('dev', true);
$application = new Application($kernel);
$application->add(new SomeCommand());
$application->run();
Hope this helps!

How to bulk insert rows into realm database from web (JSON) in Xamarin

This is my very first attempt to work with Xamarin Studio and Realm (to make an app for both iOS and Android), and I have been stuck on this for the last 24 hours.
My online database table has 30,000 rows. Earlier, when I worked in Android Studio, I would import those rows on the app's first run with the help of JSON and GSON and insert them into an SQLite db.
But I am unable to do the same with Realm and Xamarin. I know I have not provided any code snippet (my effort), but honestly, even after searching a lot about this, I couldn't find out how I should proceed.
I've already answered this in the GitHub issue, but in case someone else stumbles across it, the best way to do that without blocking the UI thread is to use the Realm.WriteAsync API. Basically, you'll do something like:
var items = await service.GetAllItems();
// I assume items are already deserialized RealmObject-s

var realm = Realm.GetInstance();
await realm.WriteAsync(r =>
{
    foreach (var item in items)
    {
        r.Manage(item);
    }
});

/* Data is loaded, show message or process it in other ways */
One thing to note is that within the WriteAsync lambda we're using the r instance and not the original realm one. The reason is that Realm instances are not thread safe and the asynchronous write happens on another thread, so WriteAsync implicitly creates another instance and passes it as the argument of the action parameter.
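For the part this glosses over (getting the items in the first place), here is a rough sketch of what GetAllItems could look like, assuming the web API returns a JSON array and Newtonsoft.Json is used for deserialization; the Item class, its field names and the URL are all hypothetical:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;
using Realms;

// hypothetical model -- property names must match the JSON fields (or use [JsonProperty])
public class Item : RealmObject
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ItemService
{
    private static readonly HttpClient _http = new HttpClient();

    // downloads the whole table as JSON and deserializes it into unmanaged RealmObjects
    public async Task<List<Item>> GetAllItems()
    {
        var json = await _http.GetStringAsync("https://example.com/api/items"); // hypothetical URL
        return JsonConvert.DeserializeObject<List<Item>>(json);
    }
}

The deserialized objects are still unmanaged at that point; they only become part of the Realm once r.Manage(item) is called inside the write block above.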

Trying to get Google Drive to work with PCL Xamarin Forms application

I'm using Xamarin Forms to build some cross-platform applications and I'd like to offer Dropbox and Google Drive as places where users can do backups, cross-platform data sharing and the like. I was able to get Dropbox working without doing platform-specific shenanigans just fine, but Google Drive is really giving me fits. I have my app set up properly with Google and have tested it with a regular CLI .NET application using their examples, which read the JSON file off the drive and create a temporary credentials file; all fine and well, but getting that to fly without access to the file system is proving elusive and I can't find any examples of how to go about it.
I'm currently just using Auth0 as a gateway to allow users to provide creds/access to my app for their account, which works dandy; the proper scope items are requested (I'm just using read-only file access for testing) and I get a bearer token and refresh token from them. However, when trying to actually use that data and just do a simple file listing, I get a 400 Bad Request error.
I'm sure this must be possible, but I can't find any examples anywhere that deviate in the slightest from using the JSON file downloaded from Google and creating a credentials file; surely you can create an instance of the DriveService object armed with only the bearer token...
Anyway, here's a chunk of test code in which I'm trying to get the DriveService object configured. If anyone has done this or has suggestions as to what to try here, I'd very much appreciate your thoughts.
public bool AuthenticationTest(string pBearerToken)
{
    try
    {
        var oInit = new BaseClientService.Initializer
        {
            ApplicationName = "MyApp",
            ApiKey = pBearerToken,
        };
        _googleDrive = new DriveService(oInit);

        FilesResource.ListRequest listRequest = _googleDrive.Files.List();
        listRequest.PageSize = 10;
        listRequest.Fields = "nextPageToken, files(id, name)";

        //All is well till this call to list the files…
        IList<Google.Apis.Drive.v3.Data.File> files = listRequest.Execute().Files;
        foreach (var file in files)
        {
            Debug.WriteLine(file.Name);
        }
        return true;
    }
    catch (Exception ex)
    {
        RaiseError(ex);
        return false;
    }
}
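For reference, one direction that looks promising from the client library surface: the ApiKey property is meant for simple API keys, not OAuth tokens, so the bearer token would instead be wrapped as the HttpClientInitializer. A hedged sketch, assuming GoogleCredential.FromAccessToken is available in the referenced Google.Apis.Auth package and that the Auth0-issued token really is a Google OAuth access token carrying the Drive scope:

using Google.Apis.Auth.OAuth2;
using Google.Apis.Drive.v3;
using Google.Apis.Services;

public DriveService BuildDriveService(string pBearerToken)
{
    // wrap the raw OAuth access token as a credential (assumption: the token
    // really is a Google-issued access token, not an Auth0-specific one)
    var credential = GoogleCredential.FromAccessToken(pBearerToken);

    var oInit = new BaseClientService.Initializer
    {
        ApplicationName = "MyApp",
        HttpClientInitializer = credential
    };
    return new DriveService(oInit);
}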