How to convert a tsurf file (.ts format) to a raster file for ArcMap - GIS

**I have a tsurf (.ts) file and want to know how to convert it to a format that can be opened in ArcMap as a raster. My .ts file contains triangulated data points.**
It's in the format given below...
.......
GOCAD_ORIGINAL_COORDINATE_SYSTEM
NAME Default
AXIS_NAME "X" "Y" "Z"
AXIS_UNIT "m" "m" "m"
ZPOSITIVE Elevation
END_ORIGINAL_COORDINATE_SYSTEM
GEOLOGICAL_FEATURE bisop64
GEOLOGICAL_TYPE top
STRATIGRAPHIC_POSITION creta 5
TFACE
VRTX 1 473500 3771000 -1103.3717041015625
VRTX 2 473750 3771000 -1087.019775390625
VRTX 3 473500 3770750 -1128.013427734375
VRTX 4 473750 3770750 -1142.8648681640625
VRTX 5 473250 3770750 -1128.40283203125
VRTX 6 473750 3771250 -1025.1702880859375
...............
I tried looking at TIN files in ArcMap but have been unsuccessful so far.
Any help in this regard would be highly appreciated.
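Since the VRTX records in the sample lie on a regular 250 m grid, one workable route is to parse them and write an ESRI ASCII grid (.asc), which ArcMap can import with the "ASCII to Raster" tool. Below is a minimal Python sketch under that regular-spacing assumption; a true TIN surface with irregular vertices would need interpolation onto a grid (e.g. scipy.interpolate.griddata) instead.

```python
# Sketch: convert GOCAD TSurf VRTX records into an ESRI ASCII grid string.
# Assumes the vertices already lie on a regular grid (as in the sample);
# cell values that have no vertex are filled with the NODATA value.

def parse_vrtx(lines):
    """Extract (x, y, z) tuples from VRTX/PVRTX lines of a .ts file."""
    points = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] in ("VRTX", "PVRTX"):
            points.append((float(parts[2]), float(parts[3]), float(parts[4])))
    return points

def write_esri_ascii(points, cellsize, nodata=-9999.0):
    """Render regularly spaced points as an ESRI ASCII raster string."""
    xs = sorted({p[0] for p in points})
    ys = sorted({p[1] for p in points})
    zmap = {(x, y): z for x, y, z in points}
    header = "\n".join([
        f"ncols {len(xs)}",
        f"nrows {len(ys)}",
        f"xllcorner {xs[0] - cellsize / 2}",   # corner = first cell centre minus half a cell
        f"yllcorner {ys[0] - cellsize / 2}",
        f"cellsize {cellsize}",
        f"NODATA_value {nodata}",
    ]) + "\n"
    rows = []
    for y in reversed(ys):                      # ESRI ASCII rows run north to south
        rows.append(" ".join(str(zmap.get((x, y), nodata)) for x in xs))
    return header + "\n".join(rows)
```

Writing the returned string to a .asc file (plus an optional .prj for the coordinate system) gives ArcMap something it can rasterize directly.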

Related

Why is neo4j not adding a new line for the \n character coming in data from CSV?

I have data coming from a CSV that contains the \n character, and I expect neo4j to insert a new line when assigning that string to an attribute of a node. Apparently it's not working: I can see the \n character added to the string as-is.
How can I make it work? Thanks in advance.
Following is one such example string from the CSV:
Combo 4 4 4 5 \n\nSpare Fiber Inventory. \nMultimode Individual fibers from 9927/9928 to FDB.\nNo available spares from either BTS to FDB - New conduits would be required\n\nFrom FDB to tower top. 9 of 9 Spares available on 2.5 riser cables.
My load command:
USING PERIODIC COMMIT 500
LOAD CSV WITH HEADERS
FROM 'file:///abc.csv' AS line
WITH line WHERE line.parent <> "" AND line.type = 'LSD' AND line.parent_type = 'XYZ'
This is a hack I made to replace the occurrences of \n with a newline. The character \ is an escape character, so the replace() call substitutes the two-character sequence \n with a real newline. The second argument of replace() is a string that is deliberately split across two lines so that it contains an actual newline character; do not join those two lines into one.
LOAD CSV WITH HEADERS
FROM 'file:///abc.csv' AS line
WITH line WHERE line.parent <> ""
WITH replace(line.parent,'\\n',"
") as parent
MERGE (p:Parent {parent: parent})
RESULT:
{
"identity": 16,
"labels": [
"Parent"
],
"properties": {
"parent": "Combo 4 4 4 5
Spare Fiber Inventory.
Multimode Individual fibers from 9927/9928 to FDB.
No available spares from either BTS to FDB - New conduits would be required
From FDB to tower top. 9 of 9 Spares available on 2.5 riser cables."
}
}
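If preprocessing the file is an option, the literal \n sequences can also be converted into real newlines before LOAD CSV ever reads the file. Here is a minimal Python sketch (the file paths are placeholders; quoting is left at the csv module's defaults, which Neo4j's CSV reader accepts):

```python
# Sketch: rewrite a CSV so that the literal two-character sequence \n
# inside fields becomes a real newline before Neo4j loads the file.
import csv

def convert_escaped_newlines(src_path, dst_path):
    with open(src_path, newline='') as src, \
         open(dst_path, 'w', newline='') as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        for row in reader:
            # replace literal backslash-n with an actual newline character;
            # the writer re-quotes any field that now spans multiple lines
            writer.writerow([field.replace('\\n', '\n') for field in row])
```

After this, the plain LOAD CSV command works without any replace() in Cypher.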

NetLogo - using BehaviorSpace to get all turtle locations as the result of each repetition

I am using BehaviorSpace to run the model hundreds of times with different parameters, but I need to know the locations of all turtles as a result, instead of only the number of turtles. How can I achieve this with BehaviorSpace?
Currently, I output the results in a csv file by this code:
to-report get-locations
  report (list xcor ycor)
end

to generate-output
  file-open "model_r_1.0_locations.csv"
  file-print csv:to-row get-locations
  file-close
end
but all results are dumped into the same csv file, so I can't tell which run each result came from.
Seth's suggestion of incorporating behaviorspace-run-number in the filename of your csv output is one alternative. It would allow you to associate that file with the summary data in your main BehaviorSpace output file.
Another option is to include list reporters as "measures" in your behavior space experiment definition. For example, in your case:
map [ t -> [ xcor ] of t ] sort turtles
map [ t -> [ ycor ] of t ] sort turtles
You can then parse the resulting list "manually" in your favourite data analysis language. I've used the following function for this before, in Julia:
parselist(strlist, T = Float64) = parse.(T, split(strlist[2:end-1]))
I'm sure you can easily write some equivalent code in Python or R or whatever language you're using.
In the example above, I've outputted separate lists for the xcor and the ycor of turtles. You could also output a single "list of lists", but the parsing would be trickier.
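Picking up on the "easily write some equivalent code in Python" remark, a minimal Python counterpart of the Julia parselist function above might look like this (it assumes the same NetLogo list format, e.g. "[16 10 -2]"):

```python
def parse_list(strlist, cast=float):
    """Parse a NetLogo list literal such as "[16 10 -2]" into Python values.

    Strips the surrounding brackets, splits on whitespace, and applies
    `cast` (float by default, mirroring the Julia version) to each token.
    """
    return [cast(tok) for tok in strlist[1:-1].split()]
```

For example, parse_list("[16 10 -2]") yields [16.0, 10.0, -2.0].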
Edit: How to do this using the csv extension and R
Coincidentally, I had to do something similar today for a different project, and I realized that a combination of the csv extension and R can make this very easy.
The general idea is the following:
In NetLogo, use csv:to-string to encode list data into a string and then write that string directly in the BehaviorSpace output.
In R, use purrr::map and readr::read_csv, followed by tidyr::unnest, to unpack everything in a neat "one observation per row" dataframe.
In other words: we like CSV, so we put CSV in our CSV so we can parse while we parse.
Here is a full-fledged example. Let's say we have the following NetLogo model:
extensions [ csv ]

to setup
  clear-all
  create-turtles 2 [ move-to one-of patches ]
  reset-ticks
end

to go
  ask turtles [ forward 1 ]
  tick
end

to-report positions
  let coords [ (list who xcor ycor) ] of turtles
  report csv:to-string fput ["who" "x" "y"] coords
end
We then define a tiny BehaviorSpace experiment, with only two repetitions and a time limit of two, using our positions reporter as an output.
The R code to process this is pleasantly straightforward:
library(tidyverse)
df <- read_csv("experiment-table.csv", skip = 6) %>%
  mutate(positions = map(positions, read_csv)) %>%
  unnest()
Which results in the following dataframe, all neat and tidy:
> df
# A tibble: 12 x 5
`[run number]` `[step]` who x y
<int> <int> <int> <dbl> <dbl>
1 1 0 0 16 10
2 1 0 1 10 -2
3 1 1 1 9.03 -2.24
4 1 1 0 -16.0 10.1
5 1 2 1 8.06 -2.48
6 1 2 0 -15.0 10.3
7 2 0 1 -14 1
8 2 0 0 13 15
9 2 1 0 14.0 15.1
10 2 1 1 -13.7 0.0489
11 2 2 0 15.0 15.1
12 2 2 1 -13.4 -0.902
The same thing in Julia:
using CSV, DataFrames
df = CSV.read("experiment-table.csv", header = 7)
cols = filter(col -> col != :positions, names(df))
df = by(df -> CSV.read(IOBuffer(df[:positions][1])), df, cols)
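The same unnesting can also be done in Python with pandas. Here is a sketch mirroring the R version above; the six skipped header lines and the positions column name are taken from the example, everything else is generic:

```python
# Sketch: unpack a nested-CSV "positions" column from a BehaviorSpace
# output table into one-observation-per-row form ("CSV in our CSV").
import io
import pandas as pd

def unnest_positions(path):
    # skip the BehaviorSpace metadata block before the real header row
    df = pd.read_csv(path, skiprows=6)
    parts = []
    for _, row in df.iterrows():
        # each cell of "positions" is itself a CSV string -> parse it
        inner = pd.read_csv(io.StringIO(row["positions"]))
        # carry the outer columns ([run number], [step], ...) onto each inner row
        for col in df.columns:
            if col != "positions":
                inner[col] = row[col]
        parts.append(inner)
    return pd.concat(parts, ignore_index=True)
```

The result has one row per turtle per recorded step, like the tidy R dataframe shown above.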

read_csv in pandas reads the whole csv file into one column

I want to read a CSV file in pandas. I have used:
ace = pd.read_csv('C:\\Users\\C313586\\Desktop\\Daniil\\Daniil\\ACE.csv',sep = '\t')
And as output I got this:
a)First row(should be header)
_AdjustedNetWorthToTotalCapitalEmployed _Ebit _StTradeRec _StTradePay _OrdinaryCf _CfWorkingC _InvestingAc _OwnerAc _FinancingAc _ProdValueGrowth _NetFinancialDebtTotalAdjustedCapitalEmployed_BanksAndOtherInterestBearingLiabilitiesTotalEquityAndLiabilities _NFDEbitda _DepreciationAndAmortizationProductionValue _NumberOfDays _NumberOfDays360
#other rows separated by tab
0 5390\t0000000000000125\t0\t2013-12-31\t2013\tF...
1 5390\t0000000000000306\t0\t2015-12-31\t2015\tF...
2 5390\t00000000000003VG\t0\t2015-12-31\t2015\tF...
3 5390\t0000000000000405\t0\t2016-12-31\t2016\tF...
4 5390\t00000000000007VG\t0\t2013-12-31\t2013\tF...
5 5390\t0000000000000917\t0\t2015-12-31\t2015\tF...
6 5390\t00000000000009VG\t0\t2016-12-31\t2016\tF...
7 5390\t0000000000001052\t0\t2015-12-31\t2015\tF...
8 5390\t00000000000010SG\t0\t2015-12-31\t2015\tF...
Do you have any idea why this happens? How can I fix it?
You should use the argument sep=r'\\t' (note the doubled backslash). A separator longer than one character is treated as a regular expression, and the regex \\t matches the literal two-character sequence \t that appears in your file; plain '\t' (and even r'\t', which is still the tab regex) would split on real tab characters instead, which is why everything ended up in one column.
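To illustrate the behaviour with a small in-memory stand-in for ACE.csv (a sketch; the data and column names are made up, and matching the literal sequence requires escaping the backslash in the regex):

```python
# Sketch: a file whose fields are separated by the literal two-character
# sequence \t (a backslash followed by "t"), not by real tab characters.
import io
import pandas as pd

data = "a\\tb\\tc\n1\\t2\\t3\n"   # \\t puts a literal backslash-t in the string

# Splitting on a real tab finds no separator, so everything lands in one column:
one_col = pd.read_csv(io.StringIO(data), sep='\t')

# A multi-character sep is treated as a regex; \\t matches the literal \t:
fixed = pd.read_csv(io.StringIO(data), sep=r'\\t', engine='python')
```

Here one_col ends up with a single column, while fixed has the three columns a, b, c.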

Invalid subscript "list" error when converting from JSON to Dataframe using R

I was following the instructions mentioned in the question below to convert JSON data to a dataframe using the RJSONIO package:
How to convert JSON to Dataframe
Below is the JSON summary of my data; each field contains an equal number of values, somewhere around 50,000. The value in the color field is of type list, and my guess is that this is what's causing the problem.
json
title: chr
remaining: chr
color: list()
brand: chr
modelnum: chr
size: chr
I am attaching a sample set of JSON values; if anyone in the community can shed some light on how to model this into a dataframe, it'll be great!
Sample JSON data:
{"title":"oneplus 3","remaining":"","color":[],"brand":"OnePlus","modelnum":"OnePlus 3","size":""}
{"title":"oneplus 3 (soft gold, 64 gb)","remaining":"(soft )","color":["gold"],"brand":"OnePlus","modelnum":"OnePlus 3","size":"64 gb"}
{"title":"deal 1:oneplus 3 (graphite, 64gb) 6gb ram 4g lte - 1 year manufacture warranty","remaining":"deal 1: 6gb ram 4g lte - 1 year manufacture warranty","color":["graphite"],"brand":"OnePlus","modelnum":"OnePlus 3","size":"64gb"}
{"title":"oneplus 3 (graphite, 64 gb)","remaining":"","color":["graphite"],"brand":"OnePlus","modelnum":"OnePlus 3","size":"64 gb"}
{"title":"xiaomi redmi note 3 32gb","remaining":"","color":[],"brand":"Xiaomi","modelnum":"Redmi Note 3","size":"32gb"}
{"title":"xiaomi redmi note 3 (grey 32 gb) mobile phone","remaining":"mobile phone","color":["grey"],"brand":"Xiaomi","modelnum":"Redmi Note 3","size":"32 gb"}
{"title":"xiaomi redmi note 3 new (6 month brand warranty)","remaining":"new (6 month brand warranty)","color":[],"brand":"Xiaomi","modelnum":"Redmi Note 3","size":""}
{"title":"xiaomi redmi note 3 (gold 32gb) mobile phone","remaining":"mobile phone","color":["gold"],"brand":"Xiaomi","modelnum":"Redmi Note 3","size":"32gb"}
{"title":"xiaomi redmi note 3 (dark grey) (32gb)","remaining":"","color":["dark grey"],"brand":"Xiaomi","modelnum":"Redmi Note 3","size":"32gb"}
{"title":"mi redmi note 3 32gb dark grey","remaining":"mi","color":["dark grey"],"brand":"Xiaomi","modelnum":"Redmi Note 3","size":"32gb"}
{"title":"xiaomi redmi note 3 (gold, 32gb)","remaining":"","color":["gold"],"brand":"Xiaomi","modelnum":"Redmi Note 3","size":"32gb"}
R-code:
library(RJSONIO)
json <- fromJSON(file_path_for_the_above_data, nullValue = NA)
dat <- lapply(json, function(j) {
  as.data.frame(replace(j, sapply(j, is.list), NA))
})
This is where the error occurs.
Error in replace(j, sapply(j, is.list), NA) :
invalid subscript type 'list'
Thank you.
The issue was with the format of the JSON: the file contains one object per line rather than a single JSON document. Fixing the JSON array (wrapping the objects in [ ] and separating them with commas) basically did the trick.
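For completeness, such line-delimited records can also be flattened in plain Python before any dataframe conversion (a sketch; collapsing the color list into a comma-joined string, with None for empty lists, is my own choice, not part of the original question):

```python
# Sketch: parse newline-delimited JSON records and flatten the list-valued
# "color" field so each record becomes a flat, dataframe-ready dict.
import json

def flatten_records(lines):
    rows = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        rec = json.loads(line)
        # join list values into one string; an empty list becomes None (like NA)
        rec["color"] = ", ".join(rec["color"]) or None
        rows.append(rec)
    return rows
```

The resulting list of flat dicts can be handed straight to a dataframe constructor.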

Writing a list of lists to file, removing unwanted characters and a new line for each

I have a list, "newdetails", that is a list of lists, and it needs to be written to a csv file. Each field needs to occupy its own cell (without the surrounding brackets, quotes, and commas) and each sublist needs to go on a new line.
The code I have so far is:
file = open(s + ".csv","w")
file.write(str(newdetails))
file.write("\n")
file.close()
This, however, writes to the csv file in the following unacceptable format:
[['12345670' 'Iphone 9.0' '500' 2 '3' '5'] ['12121212' 'Samsung Laptop' '900' 4 '3' '5']]
The format I wish for it to be in is as shown below:
12345670 Iphone 9.0 500 5 3 5
12121212 Samsung Laptop 900 5 3 5
You can use the csv module to write the information to a csv file.
Please check the links below:
csv module in Python 2
csv module in Python 3
Code:
import csv

new_details = [['12345670', 'Iphone 9.0', '500', 2, '3', '5'],
               ['12121212', 'Samsung Laptop', '900', 4, '3', '5']]

with open("result.csv", "w", newline='') as fh:
    writer = csv.writer(fh, delimiter=' ')
    for data in new_details:
        writer.writerow(data)
Content of result.csv:
12345670 "Iphone 9.0" 500 2 3 5
12121212 "Samsung Laptop" 900 4 3 5