I have a work order in Maximo 7.6.1.1:
The WO has LatitudeY and LongitudeX coordinates in the Service Address tab.
The WO has a custom zone field.
And there is a feature class (polygons) in a separate GIS database.
I want to do a spatial query that returns an attribute from the polygon record the WO intersects, and use it to populate the zone field in the WO.
How can I do this?
Doing this live in Maximo is possible with an automation script, or by writing custom code into Spatial (more challenging). You want to use the /MapServer/identify endpoint and POST the geometry (x, y), the coordinate system, and the layer you want to query.
You will have to format the geometry object correctly and test your POST from the identify window (the HTML page for the identify operation on the ArcGIS REST endpoint). I usually grab the request from the Network tab of the browser's developer tools once I get it working, change the output format to JSON, and use it in my code.
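For example, here is a rough sketch of such a request in plain Python with requests, useful for prototyping the call outside Maximo (the hostname, layer id, and coordinate values are placeholders):
import requests

# Placeholder ArcGIS REST identify endpoint - adjust host, service, and layer.
url = "http://hostname:port/arcgis/rest/services/Example/Zones/MapServer/identify"
params = {
    "geometry": "-93.26,44.98",          # x,y of the work order (placeholder)
    "geometryType": "esriGeometryPoint",
    "sr": 4326,                          # coordinate system of the geometry
    "layers": "all:15",                  # the layer to query (placeholder id)
    "tolerance": 1,
    "mapExtent": "-180,-90,180,90",      # required by identify; any extent works
    "imageDisplay": "400,300,96",        # required by identify
    "returnGeometry": "false",
    "f": "json",
}
resp = requests.post(url, data=params)
# Each result carries the attributes of the intersected polygon.
attributes = resp.json()["results"][0]["attributes"]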
You may actually not need to touch your Maximo environment at all. How about just using a trigger on your work orders table? That trigger can then automatically fill the zone ID from a simple select statement that matches x and y with the zones in the zones table. Here is how that could look.
This assumes that your work orders are in a table like this:
create table work_orders (
  wo_id   number primary key,
  x       number,
  y       number,
  zone_id number
);
and the zones in a table like this:
create table zones (
  zone_id number primary key,
  shape   st_geometry
);
Then the trigger would look like this:
create or replace trigger work_orders_fill_zone
before insert or update of x, y on work_orders
for each row
begin
  select zone_id
    into :new.zone_id
    from zones
   where sde.st_contains(shape, sde.st_point(:new.x, :new.y, 4326)) = 1;
end;
/
Some assumptions:
The x and y columns contain coordinates in WGS84 longitude/latitude (not in some projection or some other long/lat coordinate system)
Zones don't overlap: a work order point is therefore always in one and only one zone. If not, the query may return multiple results, which you would then need to handle.
Zones fully cover the territory your work orders can take place in. If a work order location can fall outside all your zones, you also need to handle that case (the query would return no result).
The x and y columns are always filled. If they are optional, you also need to handle that case (set zone_id to NULL if either x or y is NULL).
After that, each time a new work order is inserted in the work_orders table, the zone_id column will be automatically updated.
You can initialize zone_id in your existing work orders with a simple update:
update work_orders set x=x, y=y;
This will make the trigger run for each row in the table ... It may take some time to complete if the table is large.
Adapt the code in the Library Scripts section of Maximo 76 Scripting Features (PDF):
# What the script does:
# 1. Takes the X and Y coordinates of a work order in Maximo
# 2. Generates a URL from the coordinates
# 3. Executes the URL via a separate script/library (LIBHTTPCLIENT)
# 4. Performs a spatial query in an ESRI REST feature service (a separate GIS system)
# 5. Returns JSON text to Maximo with the attributes of the zone that the work
#    order intersected
# 6. Parses the zone number from the JSON text
# 7. Inserts the zone number into the work order record
from psdi.mbo import MboConstants
from java.util import HashMap
from com.ibm.json.java import JSONObject

field_to_update = "ZONE"
gis_field_name = "ROADS_ZONE"

def get_coords():
    """
    Get the y and x coordinates (UTM projection) from the WOSERVICEADDRESS table
    via the SERVICEADDRESS system relationship.
    The datatype of the LatitudeY and LongitudeX fields is decimal.
    """
    laty = mbo.getDouble("SERVICEADDRESS.LatitudeY")
    longx = mbo.getDouble("SERVICEADDRESS.LongitudeX")
    # Test values
    # laty = 4444444.7001941890
    # longx = 666666.0312127020
    return laty, longx

def is_latlong_valid(laty, longx):
    # Verify that the numbers are legitimate UTM coordinates
    return (4000000 <= laty <= 5000000 and
            600000 <= longx <= 700000)

def make_url(laty, longx, gis_field_name):
    """
    Assembles the URL (including the longx and the laty).
    Note: The coordinates are flipped in the URL.
    """
    url = (
        "http://hostname:port"
        "/arcgis/rest/services/Example"
        "/Zones/MapServer/15/query?"
        "geometry={0}%2C{1}&"
        "geometryType=esriGeometryPoint&"
        "spatialRel=esriSpatialRelIntersects&"
        "outFields={2}&"
        "returnGeometry=false&"
        "f=pjson"
    ).format(longx, laty, gis_field_name)
    return url

def fetch_zone(url):
    # Get the JSON text from the feature service (the JSON text contains the zone value).
    ctx = HashMap()
    ctx.put("url", url)
    service.invokeScript("LIBHTTPCLIENT", ctx)
    json_text = str(ctx.get("response"))
    # Parse the zone value from the JSON text
    obj = JSONObject.parse(json_text)
    parsed_val = obj.get("features")[0].get("attributes").get(gis_field_name)
    return parsed_val

try:
    laty, longx = get_coords()
    if not is_latlong_valid(laty, longx):
        service.log('Invalid coordinates')
    else:
        url = make_url(laty, longx, gis_field_name)
        zone = fetch_zone(url)
        # Insert the zone value into the zone field in the work order
        mbo.setValue(field_to_update, zone, MboConstants.NOACCESSCHECK)
        service.log(zone)
except:
    # If the script fails, then set the field value to null.
    mbo.setValue(field_to_update, None, MboConstants.NOACCESSCHECK)
    service.log("An exception occurred")
LIBHTTPCLIENT: (a reusable Jython library script)
from psdi.iface.router import HTTPHandler
from java.util import HashMap
from java.lang import String

# 'url' is passed in via the invoking script's context HashMap.
handler = HTTPHandler()
map = HashMap()
map.put("URL", url)
map.put("HTTPMETHOD", "GET")
responseBytes = handler.invoke(map, None)
# 'response' is read back by the invoking script from the same context.
response = String(responseBytes, "utf-8")
Is there a way to sort data and drop duplicates using pure pyarrow tables? My goal is to retrieve the latest version of each ID based on the maximum update timestamp.
Some extra details: my datasets are normally structured into at least two versions:
historical
final
The historical dataset would include all updated items from a source so it is possible to have duplicates for a single ID for each change that happened to it (picture a Zendesk or ServiceNow ticket, for example, where a ticket can be updated many times)
I then read the historical dataset using filters, convert it into a pandas DF, sort the data, and then drop duplicates on some unique constraint columns.
dataset = ds.dataset(history, filesystem=filesystem, partitioning=partitioning)
table = dataset.to_table(filter=filter_expression, columns=columns)
df = table.to_pandas().sort_values(sort_columns, ascending=True).drop_duplicates(unique_constraint, keep="last")
table = pa.Table.from_pandas(df=df, schema=table.schema, preserve_index=False)
# Alternative writer: ds.write_dataset(final, filesystem, partitioning)
# I tend to write the final dataset using the legacy writer so I can make use of
# partition_filename_cb - that way I can have one file per date_id, e.g.
# container/dataset/date_id=20210127/20210127.parquet
# (our visualization tool connects to these files directly).
pq.write_to_dataset(table, final, filesystem=filesystem, partition_cols=["date_id"],
                    use_legacy_dataset=True,
                    partition_filename_cb=lambda x: str(x[-1]).split(".")[0] + ".parquet")
It would be nice to cut out that conversion to pandas and then back to a table, if possible.
Edit March 2022: PyArrow is adding more functionalities, though this one isn't here yet. My approach now would be:
import numpy as np
import pyarrow as pa
import pyarrow.compute as pc

def drop_duplicates(table: pa.Table, column_name: str) -> pa.Table:
    # pc.index returns the first occurrence of each value, so this keeps
    # the first row per unique value.
    unique_values = pc.unique(table[column_name])
    unique_indices = [pc.index(table[column_name], value).as_py() for value in unique_values]
    mask = np.full((len(table)), False)
    mask[unique_indices] = True
    return table.filter(mask=mask)
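A quick usage sketch on a toy table; note it keeps the first occurrence per value, so to keep the latest version of each ID you would sort by the update timestamp (descending) before deduplicating:
toy = pa.table({'id': [1, 1, 2, 3, 3], 'v': ['a', 'b', 'c', 'd', 'e']})
deduped = drop_duplicates(toy, 'id')
print(deduped.to_pydict())  # {'id': [1, 2, 3], 'v': ['a', 'c', 'd']}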
//end edit
I saw your question because I had a similar one, and I solved it for my work (due to IP issues I can't post the whole code, but I'll try to answer as well as I can; I've never done this before).
import pyarrow.compute as pc
import pyarrow as pa
import numpy as np

array = table.column(column_name)
dicts = {dct['values']: dct['counts'] for dct in pc.value_counts(array).to_pylist()}
for key, value in dicts.items():
    # do stuff
I used value_counts to find the unique values and how many of them there are (https://arrow.apache.org/docs/python/generated/pyarrow.compute.value_counts.html). Then I iterated over those values. If the count was 1, I selected the row by using
mask = pa.array(np.array(array) == key)
row = table.filter(mask)
and if the count was more than 1, I selected either the first or the last one by using numpy boolean arrays as a mask again.
After iterating, it was as simple as pa.concat_tables(tables).
Warning: this is a slow process. If you need something quick & dirty, try the "unique" option (also in the same link I provided).
edit/extra: you can make it a bit faster/less memory intensive by keeping a single numpy boolean mask while iterating over the dictionary, and then at the end returning table.filter(mask=boolean_mask) - see the sketch below.
I don't know how to calculate the speed though...
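A sketch of that mask-accumulation idea (same assumptions as above; np.argmax on a boolean array returns the index of the first True, i.e. the first occurrence of the value):
import numpy as np
import pyarrow as pa
import pyarrow.compute as pc

def drop_duplicates_first(table: pa.Table, column_name: str) -> pa.Table:
    values = np.array(table.column(column_name))
    mask = np.full(len(table), False)
    for dct in pc.value_counts(table.column(column_name)).to_pylist():
        # First index where the comparison is True = first occurrence of this value.
        mask[np.argmax(values == dct['values'])] = True
    return table.filter(mask=mask)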
edit2:
(sorry for the many edits. I've been doing a lot of refactoring and trying to get it to work faster.)
You can also try something like:
def drop_duplicates(table: pa.Table, col_name: str) -> pa.Table:
    column_array = table.column(col_name)
    mask_x = np.full((table.shape[0]), False)
    _, mask_indices = np.unique(np.array(column_array), return_index=True)
    mask_x[mask_indices] = True
    return table.filter(mask=mask_x)
The following gives good performance: about 2 minutes for a table with half a billion rows. The reason I don't do combine_chunks(): there is a bug; Arrow seemingly cannot combine chunked arrays if their size is too large. See details: https://issues.apache.org/jira/browse/ARROW-10172?src=confmacro
# Build a global 'index' column across all chunks of tb3.
a = [len(tb3['ID'].chunk(i)) for i in range(len(tb3['ID'].chunks))]  # chunk lengths
c = np.array([np.arange(x) for x in a])     # per-chunk local indices
a = ([0] + a)[:-1]                          # lengths shifted to compute offsets
c = pa.chunked_array(c + np.cumsum(a))      # local indices + offsets = global indices
tb3 = tb3.set_column(tb3.shape[1], 'index', c)
# Keep the first (minimum-index) row per ID.
selector = tb3.group_by(['ID']).aggregate([("index", "min")])
tb3 = tb3.filter(pc.is_in(tb3['index'], value_set=selector['index_min']))
I found duckdb can give better performance on the group by. Changing the last 2 lines above into the following gives a 2X speedup:
import duckdb
duck = duckdb.connect()
sql = "select first(index) as idx from tb3 group by ID"
duck_res = duck.execute(sql).fetch_arrow_table()
tb3 = tb3.filter(pc.is_in(tb3['index'], value_set=duck_res['idx']))
Following the basic steps to create a Prophet model and forecast:
m = Prophet(daily_seasonality=True)
m.fit(data)
forecast = m.make_future_dataframe(periods=2)
forecast.tail().T
the result is as follows (no yhat value?):
The data passed in to fit the model has two columns (date and value).
Not sure what I have missed out here.
I managed to get it to work by creating a new dataframe:
df_p = pd.DataFrame({'ds': d.index, 'y': d.values})
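For reference, a minimal end-to-end sketch under the same assumption (d is a pandas Series indexed by date). Note that make_future_dataframe only produces the future dates; yhat only appears after calling predict():
import pandas as pd
from prophet import Prophet  # in older releases: from fbprophet import Prophet

# Assume d is a pandas Series of values indexed by date.
df_p = pd.DataFrame({'ds': d.index, 'y': d.values})

m = Prophet(daily_seasonality=True)
m.fit(df_p)

future = m.make_future_dataframe(periods=2)  # dates only, no yhat yet
forecast = m.predict(future)                 # adds yhat, yhat_lower, yhat_upper
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail())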
I'm trying to break this problem down into manageable parts: Spatial Query.
I want to create an automation script that will put a work order's LatitudeY coordinate in the work order's DESCRIPTION field.
I understand that a work order's coordinates are not stored in the WORKORDER table; they're stored in the WOSERVICEADDRESS table.
Therefore, I believe the script needs to reference a relationship in the Database Configuration application that will point to the related table.
How can I do this?
(Maximo 7.6.1.1)
You can get the related Mbo, read values from it, and use them as shown in the code below. By getting the related Mbo you can also alter its attributes.
from psdi.mbo import MboConstants

serviceAddressSet = mbo.getMboSet("SERVICEADDRESS")
if serviceAddressSet.count() > 0:
    serviceAddressMbo = serviceAddressSet.moveFirst()
    latitudeY = serviceAddressMbo.getString("LATITUDEY")
    longitudeX = serviceAddressMbo.getString("LONGITUDEX")
    mbo.setValue("DESCRIPTION", "%s, %s" % (longitudeX, latitudeY), MboConstants.NOACCESSCHECK)
serviceAddressSet.close()
I've got a sample script that compiles successfully:
from psdi.mbo import MboConstants
wonum = mbo.getString("WONUM")
mbo.setValue("DESCRIPTION",wonum,MboConstants.NOACCESSCHECK)
I can change it to get the LatitudeY value via the SERVICEADDRESS relationship:
from psdi.mbo import MboConstants
laty = mbo.getString("SERVICEADDRESS.LatitudeY")
longx = mbo.getString("SERVICEADDRESS.LONGITUDEX")
mbo.setValue("DESCRIPTION",laty + ", " + longx,MboConstants.NOACCESSCHECK)
This appears to work.
I have a script that scans a DynamoDB table that stores my instance IDs. Then I query another table to see if it also has that same instance, to get all of the instance's metadata attributes from a master table. When I iterate through the query using the instance ID from the initial scan of the first table, each character of the instance ID string is printed on a new line instead of the entire string on one line. I am confused about how to fix this. Below is my code, sample output, and the expected output.
CODE:
import boto3
import json
from boto3.dynamodb.conditions import Key, Attr

def table_diff():
    dynamo = boto3.client('dynamodb')
    dynamodb = boto3.resource('dynamodb')
    table_missing = dynamodb.Table('RunningInstances')
    missing_response = dynamo.scan(TableName='CWPMissingAgent')
    for instances in missing_response['Items']:
        instance_id = instances['missing_instances']['S']
        # This works how I want, prints i-xxxxx
        print(instance_id)
        for id in instance_id:
            # This does not print how I want (prints vertically)
            print(id)
            query_response = table_missing.query(KeyConditionExpression=Key('ID').eq(id))
OUTPUT:
i
-
x
x
x
x
x
EXPECTED OUTPUT:
i-xxxxx
etc etc
instance_id is a string. Thus, when you loop over it (for id in instance_id), you are actually looping over each character in the string, and printing them out individually.
Why do you try to loop over it, when you say that just printing it produces the correct result?
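If the intent was to look each instance ID up in the other table, a minimal sketch of the fix is to drop the inner loop and pass the whole string to the query:
for instances in missing_response['Items']:
    instance_id = instances['missing_instances']['S']
    print(instance_id)  # prints i-xxxxx on one line
    # Query with the whole instance ID string, not one character at a time.
    query_response = table_missing.query(
        KeyConditionExpression=Key('ID').eq(instance_id)
    )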
Using Python 2.7, I want to pass a query from BigQuery to ML Predict, which has a specific formatting requirement.
First: Is there an easier way to go directly from the BigQuery query to JSON in the correct format so it can be passed to requests.post() instead of going through pandas (from what I understand pandas is still not supported for GCP Standard)?
Second: Is there a way to construct the query to go directly to a JSON format and then modify the JSON to reflect the ML Predict JSON requirements?
Currently my code looks like this:
#I used the bigquery to dataframe option here to view the output.
#I would like to not use pandas in the end code.
logs = log_data.execute(output_options=bq.QueryOutput.dataframe()).result()
data = logs.to_json(orient='index')
print data
'{"0":{"end_time":"2018-04-19","device":"iPad","device_os":"iOS","device_os_version":"5.1.1","latency":0.150959,"megacycles":140.0,"cost":"1.3075e-08","device_brand":"Apple","device_family":"iPad","browser_version":"5.1","app":"567","ua_parse":"0"}}'
#The JSON needs to be in this format according to google documentation.
#data = {
# 'instances': [
# {
# 'key':'',
# 'end_time': '2018-04-19',
# 'device': 'iPad',
# 'device_os': 'iOS',
# 'device_os_version': '5.1.1',
# 'latency': 0.150959,
# 'megacycles':140.0,
# 'cost':'1.3075e-08',
# 'device_brand':'Apple',
# 'device_family':'iPad',
# 'browser_version':'5.1',
# 'app':'567',
# 'ua_parse':'40.9.8'
# }
# ]
#}
So all I would need to change is the leading key '0' to 'instances' and I should be all set to pass it into requests.post().
Is there a way to accomplish this?
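For what it's worth, here is one way to do that reshaping in plain Python; a sketch assuming data is the to_json(orient='index') string shown above:
import json

parsed = json.loads(data)
# Replace the row-number keys ("0", "1", ...) with a single 'instances' list.
payload = {'instances': [parsed[k] for k in sorted(parsed, key=int)]}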
Edit-Adding BigQuery query:
%%bq query --n log_data
WITH `my.table` AS (
SELECT ARRAY<STRUCT<end_time STRING, device STRING, device_os STRING, device_os_version STRING, latency FLOAT64, megacycles FLOAT64,
cost STRING, device_brand STRING, device_family STRING, browser_version STRING, app STRING, ua_parse STRING>>[] instances
)
SELECT TO_JSON_STRING(t)
FROM `my.table` AS t
WHERE end_time >='2018-04-19'
LIMIT 1
data = log_data.execute().result()
Thanks to @MikhailBerlyant I have adjusted my query and code to look like this:
%%bq query --n log_data
SELECT [TO_JSON_STRING(t)] AS instance
FROM `yourproject.yourdataset.yourtable` AS t
WHERE end_time >='2018-04-19'
LIMIT 1
But when I run logs = log_data.execute().result() I get a QueryResultsTable object back, which results in this error when passing it into requests.post():
TypeError: QueryResultsTable job_zfVEiPdf2W6msBlT6bBLgMusF49E is not JSON serializable
Is there a way within execute() to just return the JSON?
First: Is there an easier way to go directly from the BigQuery query to JSON in the correct format
See example below
#standardSQL
WITH yourTable AS (
SELECT ARRAY<STRUCT<id INT64, type STRING>>[(1, 'abc'), (2, 'xyz')] instances
)
SELECT TO_JSON_STRING(t)
FROM yourTable t
with the result in the format you asked for:
{"instances":[{"id":1,"type":"abc"},{"id":2,"type":"xyz"}]}
The above demonstrates the query and how it will work.
In your real case, you should use something like below:
SELECT TO_JSON_STRING(t)
FROM `yourproject.yourdataset.yourtable` AS t
WHERE end_time >='2018-04-19'
LIMIT 1
hope this helps :o)
Update based on comments
SELECT [TO_JSON_STRING(t)] AS instance
FROM `yourproject.yourdataset.yourtable` t
WHERE end_time >='2018-04-19'
LIMIT 1
I wanted to add this in case someone has the same problem I had, or is at least stuck on where to go once you have the query.
I was able to write a function that formats the query result the way Google ML Predict wants it to be passed into requests.post(). This is most likely a horrible way to accomplish this, but I could not find a direct way to go from BigQuery to ML Predict in the correct format.
from google.cloud import bigquery as gcb  # assumption: gcb is the BigQuery client library

def logs(query):
    client = gcb.Client()
    query_job = client.query(query)
    CSV_COLUMNS = ('end_time,device,device_os,device_os_version,latency,megacycles,'
                   'cost,device_brand,device_family,browser_version,app,ua_parse').split(',')
    for row in query_job.result():
        var = list(row)
        l1 = dict(zip(CSV_COLUMNS, var))
        l1.update({'key': ''})
    # With LIMIT 1 in the query, l1 holds the single returned row.
    l2 = {'instances': [l1]}
    return l2
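And a hypothetical usage sketch; predict_url and token are placeholders for your ML Engine predict endpoint and auth token:
import requests

body = logs(query)  # {'instances': [{...}]}
resp = requests.post(
    predict_url,  # placeholder: the ML Predict endpoint
    json=body,
    headers={'Authorization': 'Bearer ' + token},  # placeholder token
)
print(resp.json())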