Am I structuring my Razor app "correctly"? - razor

I'm building my first razor app and am wondering if I'm organizing my files in a good way (one that will be clear if other people were to look at my code, and one that will not inhibit performance). My main concern is with library-like functions that I plan to use throughout the app. For those I've been storing string formatting functions and the like in the "App_Code" folder--for example I have the below function stored in a class there:
public static decimal? ToDecimal(object? val) => (val is DBNull) ? (decimal?)null : Convert.ToDecimal(val);
Additionally I plan to use a "Helpers" folder to store any HTML that will need to be repeated throughout the app. Are these the correct places to put this kind of stuff?
And then my bigger question is about data retrieval/storage. Say I have a "Location" class which is tied to a SQL table. If I were using EF, retrieving and storing data would each be a single call, but in my case I'm writing my own functions to get the data. Right now I have all that code on the Index page for the Location (retrieving all locations and the various linked files), but now I need to re-use a lot of those functions on the edit & view pages for a single location. Where is the right place to store those general functions (e.g. GetLocation(id), GetAllLocations, etc.)? Would it make sense to just make a "Library" folder and put them in a class there, having a dedicated file for each class I need to interact with? Or just throw it all in App_Code?
A good example of the last one is this function, where I retrieve all States from the database. I expect to need to do this on multiple pages--where should I store it?
private void GetStates()
{
    int id;
    string DbQuery = "SELECT StateId,Abbreviation,StateName " +
                     "FROM [dbo].[State] " +
                     "WHERE RowStatus<>0 ";
    States = new Dictionary<int, State>();
    SqlCommand DbCommand = new SqlCommand(DbQuery, DbConnection);
    SqlDataReader dataReader = DbCommand.ExecuteReader();
    while (dataReader.Read())
    {
        id = Convert.ToInt32(dataReader[0]);
        States.Add(id, new State()
        {
            StateId = id,
            Abbreviation = Convert.ToString(dataReader[1]),
            StateName = Convert.ToString(dataReader[2])
        });
    }
    dataReader.Close();
    DbCommand.Dispose();
}

If you have an App_Code folder, you have an ASP.NET Web Pages app, not a Razor Pages app, as this question has been tagged. Web Pages is an old framework that hasn't seen any updates for years and is based on the old .NET Framework Website Project type as opposed to the Web Application project type which requires compilation before deployment.
In Web Pages, all C# code has to go in the App_Code folder or into separate class libraries that have to be manually copied to the bin folder. In short, ASP.NET Web Pages is not a good choice if you are concerned about code organisation (or about much else: performance, cross-platform deployment, framework updates, extensibility, testing, etc.). You should look at Razor Pages instead: https://www.learnrazorpages.com/razor-pages/tutorial/bakery
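If you do move to Razor Pages, a natural home for functions like GetStates, GetLocation(id) and GetAllLocations is a dedicated data-access (service) class registered with dependency injection, rather than App_Code. Below is a minimal sketch, assuming a StateService class, a "Default" connection string and your existing State type; all names here are illustrative, not a prescribed layout:

// Services/StateService.cs -- hypothetical name and location
using System;
using System.Collections.Generic;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

public class StateService
{
    private readonly string _connectionString;

    public StateService(IConfiguration configuration) =>
        _connectionString = configuration.GetConnectionString("Default");

    // Same query as the GetStates() in the question, but self-contained and reusable
    public Dictionary<int, State> GetStates()
    {
        var states = new Dictionary<int, State>();
        using var connection = new SqlConnection(_connectionString);
        using var command = new SqlCommand(
            "SELECT StateId, Abbreviation, StateName FROM [dbo].[State] WHERE RowStatus <> 0",
            connection);
        connection.Open();
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            var id = Convert.ToInt32(reader[0]);
            states.Add(id, new State
            {
                StateId = id,
                Abbreviation = Convert.ToString(reader[1]),
                StateName = Convert.ToString(reader[2])
            });
        }
        return states;
    }
}

You would register it in Program.cs with builder.Services.AddScoped<StateService>(); and take it as a constructor parameter in the PageModel for the Index, Edit and View pages, so each query lives in exactly one place. The same pattern works for GetLocation(id) and GetAllLocations.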

Related

Document.Create.NewFamilyInstance not generating instances in Design Automation for Revit

I want to generate scaffolding automatically with Design Automation, but I'm having trouble with Create.NewFamilyInstance.
I'm using BuiltInCategory.OST_SpecialityEquipment to get a FilteredElementCollector:
FilteredElementCollector elementCollector = (new FilteredElementCollector(doc));
// Get Speciality Equipment
FilteredElementCollector col = elementCollector.OfCategory(BuiltInCategory.OST_SpecialityEquipment).OfClass(typeof(FamilySymbol));
Then I get the speciality-equipment FamilySymbol from its element Id:
ScaffoldInfor scaffoldInfor = new ScaffoldInfor
{
    Symbol = doc.GetElement(scaffoldId) as FamilySymbol
};
Finally, I get the currently defined XYZ coordinates and use NewFamilyInstance(XYZ, FamilySymbol, StructuralType) to insert a new instance of the family into the document:
FamilyInstance instance = doc.Create.NewFamilyInstance(currentPosition, scaffoldInfor.Symbol, StructuralType.NonStructural);
When I debugged with the DesignAutomationFramework locally, it worked and generated the scaffolding. But when I pushed it to the Design Automation API, the work item ran successfully yet no scaffolding was generated.
Here is the WorkItem ID: 97971783a94a482cb1c210f36b65ca86
Question: why does it not work with the Design Automation API while working properly locally?
Thanks!
After a closer look, the display issue mentioned above occurs because the generated scaffolding's Phase Created is DIFFERENT from the view's phase. In the add-in source code you can simply add instance.CreatedPhaseId = selectedWall.CreatedPhaseId; after doc.Create.NewFamilyInstance(...), so the generated scaffolding's Phase Created will be the same as that of the host wall. With this change you will see the generated scaffolding in the result.rvt. Please let me know if you have any further questions or would like me to share the source code.
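For context, a minimal sketch of where that assignment sits, assuming selectedWall is the host wall already retrieved by the add-in (as in the snippet above):

// Create the instance as before
FamilyInstance instance = doc.Create.NewFamilyInstance(
    currentPosition, scaffoldInfor.Symbol, StructuralType.NonStructural);

// Match the new instance's Phase Created to the host wall's phase
// so the scaffolding is visible in the view/phase used by result.rvt
instance.CreatedPhaseId = selectedWall.CreatedPhaseId;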

Accessing ArcGIS Pro geoprocessing history programmatically

I am writing an ArcGIS Pro Add-In and would like to view items in the geoprocessing history programmatically. The goal of this would be to get the list of parameters and tools used, to be able to better understand and recreate a workflow later, and perhaps, in another project where we would not have direct access to the history within ArcGIS Pro.
After a lot of searching through documentation, online posts, and debugging breakpoints in my code, I've found that some of this data does exist privately within the HistoryProjectItem class, but since this is a private class member, within a sealed class it seems that there would be nothing I can do to access this data. The other place I've seen this data is less than ideal, with the user having an option to write the geoprocessing history to an XML log file that lives within /AppData/Roaming/ESRI/ArcGISPro/ArcToolbox/History. Our team has been told that this file may be a problem because certain recursive operations may cause the file to balloon out of control, and after reading online, it seems that most people want this setting disabled to avoid large log files taking up space on their machine. Overall the log file doesn't seem like a great option as we fear it could slow down a user by having the program write large log files while they are working.
I was wondering if this data is stored somewhere I have missed that could be accessed programmatically from the add-in. It seems to me that the data within Project.Items is always stored regardless of user settings, but it appears to be inaccessible this way due to class member visibility. I'm too unfamiliar with geodatabases and ArcGIS file formats to know whether a project will always have a .gdb from which we could perhaps read the history.
Any insights on how to better read the Geoprocessing history in a minimally intrusive way to the user would be ideal. Is this data available elsewhere?
This was the closest/best solution I have found so far that avoids writing to the history logs, which most people disable because of file-size bloat and warnings that one operation may run other operations recursively, causing the file to balloon massively.
https://community.esri.com/t5/arcgis-pro-sdk-questions/can-you-access-geoprocessing-history-programmatically-using-the/m-p/1007833#M5842
It involves reading the .aprx file (which is written to on save) by unzipping it, parsing the XML, and filtering the contents to only GPHistoryOperations. From there I was able to read all the parameters, environment options, status, and duration of the operations that I was hoping to get.
public static void ListHistory()
{
    // this can be run in a console app (or within a Pro add-in)
    CIMGISProject project = GetProject(@"D:\tests\topologies\topotest1.aprx");
    foreach (CIMProjectItem hist in project.ProjectItems
        .Where(itm => itm.ItemType == "GPHistory"))
    {
        Debug.Print($"+++++++++++++++++++++++++++");
        Debug.Print($"{hist.Name}");
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(hist.PropertiesXML);
        //it sure would be nice if Pro SDK had things like MdProcess class in ArcObjects
        //https://desktop.arcgis.com/en/arcobjects/latest/net/webframe.htm#MdProcess.htm
        var json = JsonConvert.SerializeXmlNode(doc, Newtonsoft.Json.Formatting.Indented);
        Debug.Print(json);
    }
}

static CIMGISProject GetProject(string aprxPath)
{
    //aprx files are actually zip files
    //https://www.nuget.org/packages/SharpZipLib
    using (var zipFile = new ZipFile(aprxPath))
    {
        var entry = zipFile.GetEntry("GISProject.xml");
        using (var stream = zipFile.GetInputStream(entry))
        {
            using (StreamReader reader = new StreamReader(stream))
            {
                var xml = reader.ReadToEnd();
                //deserialize the xml from the aprx file to hydrate a CIMGISProject
                return ArcGIS.Core.CIM.CIMGISProject.FromXml(xml);
            }
        }
    }
}
Code provided by Kirk Kuykendall

How to design an app that does heavy tasks and shows the results in the frontend (e.g. Google Search Console)

Let's imagine this:
I have to download an XML document from a URL;
I have to process this document and persist its information in the database, creating or updating a lot of entities.
I think the best way is to use queues. Or maybe I can also use cronjobs.
My problem is this: if I use the same app to do the heavy tasks and also to show to the end user the results of those heavy tasks, it may happen that the heavy tasks slow down the main website.
Take a more concrete example from real life: Google Search Console (or whatever other app that does heavy tasks and shows results to the end user).
Google Search Console gets the XML map, then starts downloading each webpage, on each webpage performs a lot of analysis, then saves the results to a database and so the end user can see the errors of his website and other useful information.
So, suppose I want to rebuild Google Search Console as a Symfony app: what are the possible approaches?
I think that for sure I have to use queues, but should the app that downloads the webpages, the app that processes them, and the public frontend that shows the results of these operations be one and the same app, or two or three separate apps?
That is, should I create a single application that does all these things, or one app to download the webpages, another to process them, and another to show the results to the user?
I've been thinking a lot about this and I'm not able to find a good design to follow.
My instinct is to create a separate app for each of those tasks, but it seems that creating multiple Symfony apps isn't a good choice: Symfony 2 multiple apps?
So I really don't know which path to follow: multiple apps or one big app? And if I use one big app, should I use cronjobs, or do I still need queues?
I will first detail the architecture.
You can create a service like this:
namespace SomeBundle\Utils;

use SomeBundle\Entity\MyEntity;
use SomeBundle\myException;

class SomeService
{
    private $var;

    public function __construct()
    {
    }

    public function doStuff()
    {
        // Do stuff
    }
}
To debug your service, you can call it from a test controller or a basic controller.
You can create a command like this:
namespace SomeBundle\Command;

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use SomeBundle\Utils\SomeService;

class SomeCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this->setName('some:command')
             ->setDescription('My awesome command');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $container = $this->getContainer(); // available if your service needs dependencies
        $service = new SomeService();
        $results = $service->doStuff();
    }
}
You can create a console application like this:
require __DIR__.'/vendor/autoload.php';
require_once __DIR__.'/app/AppKernel.php';

use SomeBundle\Command\SomeCommand;
use Symfony\Bundle\FrameworkBundle\Console\Application;

$kernel = new AppKernel('dev', true);
$application = new Application($kernel);
$application->add(new SomeCommand());
$application->run();
Hope this helps !

How can I generate a file like this for Bing Heat Map data?

I am working on a fairly simple Heat Map application where the longitude and latitude of the points will be stored in a SQL Server database. I have been looking at an example that uses an array of objects as follows (eliminated a lot of data for brevity):
/* Sample data to demonstrate Bing Maps Heatmap */
/* http://alastair.wordpress.com */
var CrimeData = [
    new Microsoft.Maps.Location(52.67280, 0.94392),
    new Microsoft.Maps.Location(52.62423, 1.29493),
    new Microsoft.Maps.Location(52.62187, 1.29080),
    new Microsoft.Maps.Location(52.58962, 1.72228),
    new Microsoft.Maps.Location(52.69915, 0.24332),
    new Microsoft.Maps.Location(52.51161, 0.99350),
    new Microsoft.Maps.Location(52.59573, 1.17067),
    new Microsoft.Maps.Location(52.94351, 0.49153),
    new Microsoft.Maps.Location(52.64585, 1.73145),
    new Microsoft.Maps.Location(52.75424, 1.30079),
    new Microsoft.Maps.Location(52.63566, 1.27176),
    new Microsoft.Maps.Location(52.63882, 1.23121)
];
What I want to do is present the user with a list of some sort that displays all the data sets that exist in the database (they each have a name associated with them) and then allow the user to check all or only a select few. I will then need to generate an array like the above to create the heat map. Any ideas on a good approach to this?
What you are trying to achieve is more a general web development question than something specific to Bing Maps.
To summarize, you have multiple ways to do this, but it really depends on what you are able to do and what you need in the interface.
What process/technology?
First, you need to determine what process you want to follow to display the data, as that will determine the technology you use. The questions you need to ask yourself are:
Do you want to be able to change the data sets dynamically without refreshing the whole page?
If yes, it means that you will have to use asynchronous data loading through a dedicated web service in order to avoid loading all the information at the initial load of the page.
Do you have lots of data to load?
If so, that also argues for asynchronous loading, so you avoid loading all the data at once.
If not, loading all the elements into one or more arrays up front might be the simplest solution.
Implementation
If you now want to create a web service to load the data asynchronously, you can take a look at the following websites:
http://www.asp.net/get-started
http://www.stefanprodan.com/2011/04/async-operations-with-jquery-ajax-and-asp-net-mvc/
There are other interesting websites you will be able to find. If needed, add a comment and I'm sure the community will help you.
If you want to generate the data directly in the script, it can be quite simple: you can compose the JavaScript directly in your dynamically generated HTML page (in your ASP.NET markup code or whatever technology you're using).
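As an illustration of the web-service route, here is a hedged sketch of what such an endpoint might look like in C#/ASP.NET Core. The controller, table and column names (HeatMapPoint, DataSetName, etc.) are assumptions for the example, not your actual schema:

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("api/heatmap")]
public class HeatMapController : ControllerBase
{
    private readonly IConfiguration _config;
    public HeatMapController(IConfiguration config) => _config = config;

    // GET api/heatmap/points?dataSets=Crime2014&dataSets=Crime2015
    // Returns the points for the data sets the user checked in the list.
    [HttpGet("points")]
    public IActionResult GetPoints([FromQuery] string[] dataSets)
    {
        var points = new List<object>();
        using var connection = new SqlConnection(_config.GetConnectionString("Default"));
        connection.Open();
        foreach (var name in dataSets)
        {
            using var command = new SqlCommand(
                "SELECT Latitude, Longitude FROM HeatMapPoint WHERE DataSetName = @name",
                connection);
            command.Parameters.AddWithValue("@name", name);
            using var reader = command.ExecuteReader();
            while (reader.Read())
                points.Add(new { lat = reader.GetDouble(0), lon = reader.GetDouble(1) });
        }
        return Ok(points);
    }
}

The page's JavaScript can then map each returned point to new Microsoft.Maps.Location(lat, lon) before handing the array to the heat-map module, exactly as in the CrimeData sample above.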

Retrieving information from a web page

My application is meant to speed up the retrieval of phone call information from our telephone system.
The best way to get this information is to create a new search on the telephone system's web interface and export the results to an Excel spreadsheet which my application then imports into a DataSet.
To get the export, from the login screen, the process goes as follows:
Log in
Navigate to Reports Page
Click "Extension Detail" link
Select "Extensions" CheckBox
Select the extensions (typically all the ones currently being used) from the listbox
Specify date range
Click on Export button
It's not a big job to do it manually every day, but, for reliability, it would be great if I can make my application do this automatically the first time it starts every day.
Since more than 1 person in the company is going to use this application, having a Windows Service do it would be even better.
I don't know if it'll help, but the system is Datatex Topaz Next Generation telephone management system: http://www.datatex.co.za/downloads/index.html#TNG
Can anyone give me a basic idea how to do this?
Also, can anyone post links (in comments if need be) to pages where I can learn more about how to do this?
I have done something similar to fetch info from a website. I cannot give you an exact answer, but the idea is to send the login info to the page as form values. If the site relies on cookies, you can use this cookie-aware WebClient:
public class CookieAwareWebClient : WebClient
{
    private CookieContainer cookieContainer = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        if (request is HttpWebRequest)
        {
            (request as HttpWebRequest).CookieContainer = cookieContainer;
        }
        return request;
    }
}
You should be aware that some sites rely on a session id being passed so the first thing I did was to fetch the session id from the page:
var client = new CookieAwareWebClient();
client.Encoding = Encoding.UTF8;
var indexHtml = client.DownloadString(*index page url*);
string sessionID = fetchSessionID(indexHtml);
Then I had to log in to the page, which you can do by uploading values to it. You can see the specific form elements with "view source", but you have to know a little HTML to do so.
var values = new NameValueCollection();
values.Add("sessionid", sessionID); //Fetched session id
values.Add("brugerid", args[0]); //Username in my case
values.Add("adgangskode", args[1]); //Password in my case
values.Add("login", "Login"); //The login button
//Logging in
client.UploadValues(*url to login*, values); //If all goes perfect, I'm logged in now
And then I could download the page I needed. In your case you may use DownloadFile(...) if the file always has the same URL (something like Export.aspx?From=2010-10-10&To=2010-11-11), or UploadValues(...) where you specify the values as before and save the result.
string html = client.DownloadString(*url*);
It seems you have a lot more steps than I did, but the principle is the same. To see what values you send to the site to log in, etc., you can use a program such as Fiddler (Windows), which can capture the traffic. Essentially you do exactly the same thing, but watch out for things like the session id, which is temporary.
The best approach is really to use some native way to fetch the data, but if you don't have access to the code, database, etc., you have to do it the ugly way. You may also need an HTML parser to extract the data (actually you don't, because you export to a file). And last but not least, keep in mind that pages can change, so there is great potential for the login, parsing, etc. to break.
Please ask if you are uncertain about what is going on.
ADDITION
The CookieAwareWebClient is not my code:
http://code.google.com/p/gardens/source/browse/Montrics/Physical.MyPyramid/CookieAwareWebClient.cs?r=26
Using CookieContainer with WebClient class
I also found some relevant threads:
What's a good tool to screen-scrape with Javascript support?
http://forums.asp.net/t/1475637.aspx
With an HTTP client, you need to do the following:
Log in, using cookies or HTTP authentication
Request a page
Submit form data
This means that you need some class or component in your program that can do HTTP, cookies, authentication and forms. With it, you make the same requests a user would.
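For illustration, here is a rough sketch of those three steps using the newer HttpClient. The URLs and form field names are placeholders for whatever the Topaz web interface actually uses (inspect the page source or a Fiddler capture to find the real ones):

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class PhoneSystemExport
{
    static async Task Main()
    {
        // The CookieContainer on the handler plays the same role as CookieAwareWebClient above.
        var handler = new HttpClientHandler { CookieContainer = new CookieContainer() };
        using var client = new HttpClient(handler);

        // 1. Log in by posting the form fields the login page expects (placeholder names).
        var loginForm = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["username"] = "myUser",   // placeholder field name
            ["password"] = "myPass"    // placeholder field name
        });
        var loginResponse = await client.PostAsync("http://phone-system/login", loginForm);
        loginResponse.EnsureSuccessStatusCode(); // session cookies are now stored in the handler

        // 2./3. Request the export page with whatever query string the Export button produces.
        var bytes = await client.GetByteArrayAsync(
            "http://phone-system/Export.aspx?From=2010-10-10&To=2010-11-11");
        await File.WriteAllBytesAsync("calls.xls", bytes);
    }
}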