Using an Excel file in ArcMap 10.3

I need to take an Excel file that includes many columns, two of which are longitude and latitude.
How do I get ArcMap to accept this file as spatial data, and map it based on the lat/long data?
My data comes from this page, which allows developers to access the raw data. I downloaded the data and loaded it into an Excel file, and that's as far as I could get.

What you're looking for is Add XY Data. You can find it in the File menu (File / Add Data / Add XY Data...)
The dialog box that comes up asks you to indicate the table that was added, what columns contain XY data, and (ideally) the coordinate system of the XY data.
Note: Sometimes it helps to convert an Excel spreadsheet to plain CSV data first; ArcMap can be finicky about fields formatted as text instead of numbers, for example.

Add XY Data will do the job. Just make sure the latitude and longitude values have no trailing whitespace; otherwise ArcMap won't show those columns when it prompts you to choose the X and Y fields.

Related

Receive Excel data and turn it into objects to format as JSON

I have a solution that helps me create a wizard to fill in some data and turn it into JSON. The problem now is that I have to receive an .xlsx file and turn specific data from it into JSON: not all the data, only the fields I want, which are documented in the last link.
In this link: https://stackblitz.com/edit/xlsx-to-json I can access the Excel data and turn it into an object (when I assign document.getElementById('output').innerHTML = JSON.parse(dataString); it shows [object Object]).
I want to implement this solution and automatically get the fields specified in config.ts, but I can't get it to work. For now, I have these in my HTML and app-component.ts:
https://stackblitz.com/edit/angular-xbsxd9 (It's probably not compiling but it's to show the code only)
It wasn't quite clear what you were asking, but I'm assuming what you are trying to do is:
Given the data in the spreadsheet that is uploaded,
use a config that holds the list of column names you want returned in the JSON when the user clicks to download.
Based on this, I've created a fork of your sample here -> Forked StackBlitz
What I've done is:
Use the map operator on the array returned from the sheet_to_json method.
Within the map, loop through each key of the record (each key being a column in this case).
If a column in the row is defined in the property-map file (config), return it.
This approach strips out all the columns you don't care about up front, so that by the time the user clicks to download the file, only the columns you want are returned. If you need to keep the original columns, you can move this logic somewhere more convenient for you.
I also augmented the property map a little to give you more granular control over how the data in the returned JSON is formatted, i.e. numbers aren't treated as strings in the final output. You can use this as a template if it suits your needs for any additional formatting.
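A minimal sketch of that map-and-filter step, assuming rows shaped like the objects XLSX.utils.sheet_to_json returns and a hypothetical propertyMap config (the column names and types here are made up):

```javascript
// Hypothetical config: column name -> desired type in the output JSON
const propertyMap = {
  Name: 'string',
  Age: 'number'
};

// Rows as sheet_to_json would return them (all values come in as strings here)
const rows = [
  { Name: 'Ana', Age: '34', Internal: 'drop me' },
  { Name: 'Bo', Age: '27', Internal: 'drop me too' }
];

// Keep only the configured columns, coercing each value per the map
const filtered = rows.map(row =>
  Object.keys(row).reduce((out, key) => {
    if (key in propertyMap) {
      out[key] = propertyMap[key] === 'number' ? Number(row[key]) : String(row[key]);
    }
    return out;
  }, {})
);

console.log(JSON.stringify(filtered));
// [{"Name":"Ana","Age":34},{"Name":"Bo","Age":27}]
```

The Internal column never reaches the output, and Age comes back as a real number rather than a string.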
Hope it helps.

PEGA relief map - data table exported to CSV using JSON

New job, new platform: Pega. Pega is being used to host a relief map of ranchers and farmers for a conservation project. A data table can be generated with about 20-22 columns. The developer told me that the export to CSV uses a function exposed by Pega to convert a JSON payload to a .CSV file.
The first problem is that empty cells in the data table in the browser show a "-" (dash), because our client doesn't want empty data cells. The dashes are not carried over to the .CSV export.
The second problem is that a column that displays its data in quotation marks is exported to .CSV with doubled quotation marks.
I have been going over the code in developer view trying to figure out where and how the data is being exported that way, but I can't find the exact string.
Does anyone else have experience with Pega using a JSON command to export a .CSV file?

Building a classifier with J48

Weka is meant to make it very easy to build classifiers. There are many different kinds, and here I want to use a scheme called “J48” that produces decision trees.
Weka can read Comma Separated Values (.csv) format files by selecting the appropriate File Format in the Open file dialog.
I've created a small spreadsheet file (see the next image), saved it in .csv format, and loaded it into Weka.
The first row of the .csv file has the attribute names, separated by commas, which in this case are classe real and resultado modelo.
I've got the dataset opened in the Explorer.
If I go to the Classify panel and choose a classifier (open trees and click J48), I should just be able to run it, since I have the dataset and the classifier. (see the next image)
Well, it doesn't let me press Start. (see the next image)
What do I need to do to fix this?
If you look back at the Preprocess tab, you will see that resultado modelo is probably being treated as a numeric attribute. J48 only works with a nominal class attribute. (Predictor attributes can be numeric, as commenter nekomatic noted.)
You can change this by using a filter in the Preprocess tab. Choose the unsupervised attribute filter NumericToNominal and this will convert all your variables (or a subset of them) from numeric to nominal. Then you should be able to run J48 just fine.

Recommended binary format for point cloud + metadata?

I'm trying to find a standard (or, at least, not completely obscure) binary data format with which to represent an X/Y/Z+R/G/B point cloud, with the added requirement that I need to have some additional metadata attached to each point. Specifically, I need to attach zero or more "source" attributes, each of which is an integer, to each point.
Is there an existing binary data format which is well-suited to this? Or, perhaps, would it be wiser for me to go with two separate data files, where the metadata just refers to the points in the cloud by their index into the full list of points?
From what I can tell, the PLY format allows arbitrary length lists of attributes attached to each element and can have either ASCII or "binary" format:
http://www.mathworks.com/matlabcentral/fx_files/5459/1/content/ply.htm
As far as I know, PCD only allows a fixed number of fields:
http://pointclouds.org/documentation/tutorials/pcd_file_format.php
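For illustration, a binary PLY header for XYZ+RGB points with a variable-length list of integer source attributes per point might look like the following (the property name source_ids is a placeholder, not part of any standard):

```
ply
format binary_little_endian 1.0
element vertex 1000
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
property list uchar int source_ids
end_header
```

The `property list <count-type> <item-type> <name>` line is what allows zero or more integers to be attached to each vertex; a count of zero is valid, which covers points with no source attributes.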

Data format and X/Y configuration for dynamic csv display

I am trying to understand the demo at http://www.highcharts.com/demo/line-ajax which plots data using a csv file fetched through an ajax call.
Highcharts appears to assume that the first column is the X axis, with subsequent columns being Y-axis data with the same units. The series part in the provided jsfiddle can be completely removed and the example still runs the same, so I believe it's not being used when the data csv property is set. I was also unable to find any explanation of the csv property in the Highcharts API docs.
Note: This example is using a different approach from other csv documentation on the site.
The accepted CSV format does not seem to support double quotes. Also, I can get data with two columns to render, but I was wondering whether there is any way to tell Highcharts to use two particular columns for the X and Y axes while ignoring the other columns in the input CSV.
Is there a good reference to configuration settings available when using data csv property as in the given example?
Is there any example of a composite chart with spline & scatter plots created dynamically from csv data fetched thru ajax?
The docs for the data.js plugin are described inside the data.js file itself.
And here is a demo for spline and scatter: http://jsfiddle.net/3qv11owm/
series: [{
type: 'scatter'
}, {
type: 'spline'
}]
Options from the series array are merged, index by index, with the series parsed from the CSV.
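To make that column mapping concrete, here is a hand-rolled sketch of how the data module interprets CSV: the first column becomes X, and each remaining column becomes one Y series. This is an illustration of the behavior, not the actual Highcharts implementation, and the CSV string is made up:

```javascript
// Sketch of the CSV-to-series mapping: first column -> X values,
// each remaining column -> one series named after its header.
function csvToSeries(csv) {
  const lines = csv.trim().split('\n').map(line => line.split(','));
  const header = lines[0];
  const rows = lines.slice(1);
  // One series per column after the first
  return header.slice(1).map((name, i) => ({
    name,
    data: rows.map(r => [Number(r[0]), Number(r[i + 1])])
  }));
}

const series = csvToSeries('Time,Observed,Fitted\n0,1.2,1.0\n1,2.9,3.0');
console.log(series[0].name);  // "Observed"
console.log(series[1].name);  // "Fitted"
```

Options from the chart's series array are then merged with these parsed series by index, which is how a scatter type can be applied to the first data column and a spline type to the second.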