In the AnyLogic GIS environment, I want to send a message to the agents that are close to the agent that sends the message.
Is it possible to send a message to more than one agent within a certain distance in a GIS environment?
Thanks,
I used send("msg", this.getNearestAgent(agents)); but it only sends the message to the single nearest agent, not to all nearby agents.
You will need to create your own function for this. I created the following one with an agent type called MyAgent, so replace MyAgent with whatever agent type you are using:
for (MyAgent m : myAgents) {
    if (agent.distanceTo(m) < 1000000 && agent != m) {
        send("msg", m);
    }
}
Note that distanceTo returns a distance in meters. I also added the second condition in the if statement to make sure the agent doesn't send a message to itself. Finally, make sure to pass the sending agent into your function as an argument (agent in the snippet above).
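The same filter can be sketched outside AnyLogic. Below is a minimal Python analogue of the broadcast-within-radius idea; the Agent class, broadcast_nearby function, and plain Euclidean distance are all illustrative stand-ins, not the AnyLogic API (whose GIS distanceTo returns meters along the map):

```python
import math

class Agent:
    """Hypothetical stand-in for an AnyLogic agent (not the real API)."""
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y
        self.inbox = []

    def distance_to(self, other):
        # Plain Euclidean distance; AnyLogic's GIS distanceTo returns meters.
        return math.hypot(self.x - other.x, self.y - other.y)

def broadcast_nearby(sender, agents, radius, msg):
    """Deliver msg to every agent within radius of sender, excluding sender."""
    for a in agents:
        if a is not sender and sender.distance_to(a) < radius:
            a.inbox.append(msg)

agents = [Agent("a", 0, 0), Agent("b", 3, 4), Agent("c", 100, 0)]
broadcast_nearby(agents[0], agents, 10, "msg")
```

The `a is not sender` check mirrors the `agent != m` condition in the Java snippet, so the sender never messages itself.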
AWS Step Function
My problem is how to sendTaskSuccess or sendTaskFailure to activities that are running under a state machine in AWS.
My actual intent is to notify the specific activities that belong to a particular state machine execution.
I can successfully send a notification to all waiting activities by activity ARN, but my actual need is to send a notification to the specific activity that belongs to a particular state machine execution.
Example: StateMachine - SM1
There are two executions ongoing for SM1: SM1E1 and SM1E2. In that case I want to sendTaskSuccess to the activity that belongs to SM1E1.
The following is the code I used, but it sends the notification to all activities:
GetActivityTaskResult getActivityTaskResult = client.getActivityTask(
        new GetActivityTaskRequest().withActivityArn("arn detail"));
if (getActivityTaskResult.getTaskToken() != null) {
    try {
        JsonNode json = Jackson.jsonNodeOf(getActivityTaskResult.getInput());
        String outputResult = patientRegistrationActivity.setStatus(json.get("patientId").textValue());
        System.out.println("outputResult " + outputResult);
        SendTaskSuccessRequest sendTaskRequest = new SendTaskSuccessRequest()
                .withOutput(outputResult)
                .withTaskToken(getActivityTaskResult.getTaskToken());
        client.sendTaskSuccess(sendTaskRequest);
    } catch (Exception e) {
        client.sendTaskFailure(
                new SendTaskFailureRequest().withTaskToken(getActivityTaskResult.getTaskToken()));
    }
}
As far as I know you have no control over which task token is returned; you may get one for SM1E1 or SM1E2, and you cannot tell them apart by looking at the token itself. GetActivityTask also returns the task's input, so based on that you may be able to tell which execution you are dealing with. However, if you get a token you are not interested in, I don't think there is a way to put it back, so you won't be able to retrieve it again with GetActivityTask later. You could store it in a database somewhere for later use.
One idea you can try is to use the new callback integration pattern. You can specify the Payload parameter in the state definition to include the task token like this token.$: "$$.Task.Token" and then use GetExecutionHistory to find the TaskScheduled state of the execution you are interested in and retrieve the parameters.Payload.token value and then use that with sendTaskSuccess.
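The lookup step of that idea can be sketched in Python. GetExecutionHistory returns a list of events where a TaskScheduled event carries its parameters as a JSON string; the extract_task_token helper and the synthetic history below are mine for illustration (the question's code uses the Java SDK, and a real history would come from boto3's get_execution_history):

```python
import json

def extract_task_token(events):
    """Scan execution-history events for a TaskScheduled entry and pull the
    token injected by the state definition via token.$: "$$.Task.Token".
    `events` mirrors the shape returned by Step Functions GetExecutionHistory."""
    for ev in events:
        if ev.get("type") == "TaskScheduled":
            params = json.loads(ev["taskScheduledEventDetails"]["parameters"])
            token = params.get("Payload", {}).get("token")
            if token:
                return token
    return None

# Synthetic history for illustration only; a real one comes from
# client.get_execution_history(executionArn=...) for the execution you want.
history = [
    {"type": "ExecutionStarted"},
    {"type": "TaskScheduled",
     "taskScheduledEventDetails": {
         "parameters": json.dumps({"FunctionName": "fn",
                                   "Payload": {"token": "TOKEN-SM1E1"}})}},
]
```

Once you have the token for the execution you care about (say SM1E1), you can pass it to sendTaskSuccess directly instead of polling GetActivityTask and hoping for the right one.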
Here's a snippet of my serverless.yml file that describes the state
WaitForUserInput: # Wait for the user to do something
  Type: Task
  Resource: arn:aws:states:::lambda:invoke.waitForTaskToken
  Parameters:
    FunctionName:
      Fn::GetAtt: [WaitForUserInputLambdaFunction, Arn]
    Payload:
      token.$: "$$.Task.Token"
      executionArn.$: "$$.Execution.Id"
  Next: DoSomethingElse
I did a POC to check, and below is the solution.
If a token has been consumed by getActivityTaskResult.getTaskToken() and your conditions are not satisfied by the request input, you can use the following line to avoid consuming the token: awsStepFunctionClient.sendTaskHeartbeat(new SendTaskHeartbeatRequest().withTaskToken(taskToken))
Motivation
Reduce the maintenance of an Azure DevOps task that invokes a PowerShell script with a lot of parameters ("a lot" could be 5).
The idea relies on the fact that Azure DevOps generates environment variables to reflect the build variables. So, I devised the following scheme:
Prefix all non-secret Azure DevOps variables with MyBuild.
The task's PowerShell script would call a function to check the script parameters against the MyBuild_ environment variables and automatically assign the value of the MyBuild_xyz environment variable to the script parameter xyz if the latter has no value.
This way the task command line would only contain secret parameters (which are not reflected in the environment). Often there are no secret parameters, and so the command line remains empty. We find that this scheme reduces the maintenance of tasks driven by a PowerShell script.
Example
param(
    $DBUser,
    [ValidateNotNullOrEmpty()]$DBPassword,
    $DBServer,
    $Configuration,
    $Solutions,
    $ClientDB = $env:Build_DefinitionName,
    $RawBuildVersion = $env:Build_BuildNumber,
    $BuildDefinition = $env:Build_DefinitionName,
    $Changeset = $env:Build_SourceVersion,
    $OutDir = $env:Build_BinariesDirectory,
    $TempDir,
    [Switch]$EnforceNoMetadataStoreChanges
)
$ErrorActionPreference = "Stop"
. $PSScriptRoot\AutomationBootstrap.ps1
$AutomationScripts = GetToolPackage DevOpsAutomation
. "$AutomationScripts\vNext\DefaultParameterValueBinding.ps1" $PSCommandPath -Required 'ClientDB' -Props @{
    OutDir  = @{ DefaultValue = [io.path]::GetFullPath("$PSScriptRoot\..\..\bin") }
    TempDir = @{ DefaultValue = 'D:\_gctemp' }
    DBUser  = @{ DefaultValue = 'SomeUser' }
}
The described parameter binding logic is implemented in the script DefaultParameterValueBinding.ps1 which is published in a NuGet package. The code installs the package and thus gets access to the script.
In the example above, some parameters default to predefined Azure DevOps variables, like $RawBuildVersion = $env:Build_BuildNumber. Some are left uninitialized, like $DBServer, which means it would default to $env:MyBuild_DBServer.
We can get away without the special function to do the binding, but then the script author would have to write something like this:
$DBServer = $env:MyBuild_DBServer,
$Configuration = $env:MyBuild_Configuration,
$Solutions = $env:MyBuild_Solutions,
I wanted to avoid this, because of the possibility of an accidental name mismatch.
The Problem
The approach does not work when I package the logic of DefaultParameterValueBinding.ps1 into a module function. This is because of the module scope isolation - I just cannot modify the parameters of the caller script.
Is it still possible to do? Is it possible to achieve my goal in a more elegant way? Remember, I want to reduce the cost associated with maintaining the task command line in Azure DevOps.
Right now I am inclined to retreat back to this scheme:
$xyz = $(Resolve-ParameterValue 'xyz' x y z ...)
Where Resolve-ParameterValue would first check $env:MyBuild_xyz and if not found select the first not null value out of x,y,z,...
But if the Resolve-ParameterValue method comes from a module, then the script must assume the module has already been installed, because it has no way to install it before the parameters are evaluated. Or has it?
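The Resolve-ParameterValue scheme can be sketched outside PowerShell. Here is a minimal Python analogue of the lookup order described above (the function name, the MyBuild_ prefix convention, and the example values are illustrative):

```python
import os

def resolve_parameter_value(name, *fallbacks):
    """Return the MyBuild_<name> environment variable if set,
    otherwise the first non-None fallback, otherwise None."""
    env_value = os.environ.get("MyBuild_" + name)
    if env_value is not None:
        return env_value
    return next((v for v in fallbacks if v is not None), None)

# Simulate Azure DevOps having published a MyBuild.DBServer build variable.
os.environ["MyBuild_DBServer"] = "sql01"

server = resolve_parameter_value("DBServer", None, "localhost")   # env wins
config = resolve_parameter_value("Configuration", None, "Release")  # fallback
```

The key property is that the environment variable always takes precedence, so the task command line can stay empty while the build variables drive the script.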
EDIT 1
Notice the command line used to invoke the DefaultParameterValueBinding.ps1 script does not contain the caller script parameters! It does include $PSCommandPath, which is used to obtain the PSBoundParameters collection.
Yes, but it will require modifications to the calling script and the function: pass the parameters by reference. Adam B. has a nice piece on passing parameters by reference here:
https://mcpmag.com/articles/2015/06/04/reference-variables-in-powershell.aspx
Net-net, the following is an example:
$age = 12
function birthday {
    param([ref]$age)
    $age.Value += 1
}
birthday -age ([ref]$age)
Write-Output $age
I've got an age of 12. I pass it into a function as a parameter. The function increments the value of $age by 1. You can do the same thing with a function in a module. You get my drift.
I'm using a CSS3 accordion effect, and I want to detect whether a hacker is scripting parallel requests. I have a login form and a registration form on the same page, but only one is visible at a time because of CSS3; to access the page, the user agent must be HTML5-compatible.
The trick I use is:
class Register(tornado.web.RequestHandler):
    def post(self):
        tt = self.get_argument("_xsrf") + str(time.time())
        rtime = float(tt.replace(self.get_argument("_xsrf"), ""))
        print rtime

class LoginHandler(BaseHandler):
    def post(self):
        tt = self.get_argument("_xsrf") + str(time.time())
        ltime = float(tt.replace(self.get_argument("_xsrf"), ""))
        print ltime
I used the _xsrf variable because it's unique for every user, to avoid making the server think the requests are coming from the same machine.
Now what I want: how do I compute the difference between the time values, abs(ltime - rtime)? In other words, how do I access rtime outside its class? I only know how to access a value outside a method. I want to perform this check so that if the difference is small, I know the user is running a script that makes parallel requests to kill the server.
In other words (for general Python users), if I have:
class Product:
    def info(self):
        self.price = 1000
    def show(self):
        print self.price

>>> car = Product()
>>> car.info()
>>> car.show()
1000
but what if I have another class:
class User:
    pass
How do I write a method there that prints self.price? I tried inheritance, but got this error: AttributeError: User instance has no attribute 'price'. So only methods are inherited, not attributes?
It sounds like you need to understand Model objects and patterns that use persistent storage of data. tornado.web.RequestHandler, and any object that you subclass from it, only exists for the duration of your request: from when the URL is received on the server to when data is sent back to the browser via self.write() or self.finish().
I would recommend you look at some of the Django or Flask tutorials for some basic ideas of how to build an MVC application in Python (there are no Tornado tutorials that cover this that I know of).
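To make that concrete: the timestamps must live in storage that outlives any single handler. A minimal, framework-free sketch of the idea follows; the module-level dict stands in for a database or cache, and record_request / looks_scripted are hypothetical names, not Tornado API:

```python
import time

# Module-level store standing in for persistent storage (a DB or cache in practice).
request_times = {}

def record_request(xsrf, kind, now=None):
    """Remember when the user identified by this _xsrf token hit a handler."""
    request_times.setdefault(xsrf, {})[kind] = now if now is not None else time.time()

def looks_scripted(xsrf, threshold=0.5):
    """True if the same user hit both forms within `threshold` seconds,
    i.e. abs(ltime - rtime) is suspiciously small."""
    times = request_times.get(xsrf, {})
    if "register" not in times or "login" not in times:
        return False
    return abs(times["register"] - times["login"]) < threshold

# Simulated hits from the two handlers, 0.05 s apart.
record_request("token123", "register", now=100.00)
record_request("token123", "login", now=100.05)
```

In the Tornado handlers, each post() would call record_request with its own kind ("register" or "login") instead of printing the time, and then consult looks_scripted.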
I had made a daemon that used a very primitive form of IPC (telnet, sending a string that had certain words in a certain order). I snapped out of it and am now using JSON to pass messages to a Yesod server. However, there were some things I really liked about my design, and I'm not sure what my choices are now.
Here's what I was doing:
buildManager :: Phase -> IO ()
buildManager phase = do
    let buildSeq = findSeq phase
        jid      = JobID $ pack "8"
        config   = MkConfig $ Just jid
    flip C.catch exceptionHandler $
        runReaderT (sequence_ $ buildSeq <*> stages) config
    -- ^^ I would really like to keep the above line of code, or something like it.
    return ()
Each function in buildSeq looked like this:
foo :: Stage -> ReaderT Config IO ()
data Config = MkConfig (Either JobID Product) BaseDir JobMap
JobMap is a TMVar Map that tracks information about current jobs.
So now, what I have are Handlers, which all look like this:
foo :: Handler RepJson
foo represents a command for my daemon; each handler may have to process a different JSON object.
What I would like to do is send one JSON object that represents success, and another JSON object that expresses information about some exception.
I would like foo's helper function to be able to return an Either, but I'm not sure how to get that, plus the ability to terminate evaluation of my list of actions, buildSeq.
Here's the only choice I see:
1) Make sure exceptionHandler is in Handler. Put JobMap in the App record. Using getYesod, alter the appropriate value in JobMap indicating details about the exception, which can then be accessed by foo.
Is there a better way?
What are my other choices?
Edit: For clarity, I will explain the role of Handler RepJson. The server needs some way to accept commands such as build, stop, and report. The client needs some way of knowing the results of these commands. I have chosen JSON as the medium with which the server and client communicate with each other. I'm using the Handler type just to manage the JSON in/out and nothing more.
Philosophically speaking, in the Haskell/Yesod world you want to pass the values forward, rather than return them backwards. So instead of having the handlers return a value, have them call forwards to the next step in the process, which may be to generate an exception.
Remember that you can bundle any amount of future actions into a single object, so you can pass a continuation object to your handlers and foos that basically tells them, "After you are done, run this blob of code." That way they can be void and return nothing.
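The "call forward instead of returning" suggestion is language-agnostic, so here is a tiny Python sketch of the continuation idea (in the actual Haskell code you would thread the callbacks through the ReaderT stack instead; step_build and the payload shape are mine for illustration):

```python
def step_build(payload, on_success, on_failure):
    """A stage that returns nothing; it calls forward to the next step,
    which may be the success path or the exception path."""
    if payload.get("ok"):
        on_success({"status": "built", **payload})
    else:
        on_failure({"status": "error", "reason": "bad payload"})

# The caller bundles "what happens next" into the callbacks it passes in.
results = []
step_build({"ok": True, "job": "8"}, results.append, results.append)
step_build({"ok": False}, results.append, results.append)
```

Because each stage decides which continuation to invoke, a failure naturally short-circuits the rest of the pipeline: the success continuation for later stages is simply never called.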
I am using your API in my application for several functionalities, namely:
-- Get directions to a destination
-- Get distances to several locations
-- Receive location addresses by locating with markers on the map
Application summary (Local-Deals-Now)
The application is a free application that allows local businesses located in Australia to publish offers/deals related to their business. A user can create a business account (free of charge), give the location of their business (using the Google Maps API), and add offers/deals to the system by dates and times. Public users/consumers can then view these offers, or search by distance, location, date, keywords, etc. (the distance to the offers from the current user location is calculated using the Distance Matrix API).
The application stated above is currently in its development stages and can be accessed via http://www.chuli.fclhosting.com/ . The problem I faced was when using the Distance Matrix API to calculate distances to the businesses from the currently logged-in user's location. I am using the API on the server side as follows:
$curl = curl_init();
$uri = "http://maps.googleapis.com/maps/api/distancematrix/json?origins=" . $originString . "&destinations=" . $destinationsString . "&mode=driving&language&sensor=false";
// die($uri); // debugging leftover; with this line active the request below never runs
$options = array(
    CURLOPT_URL => $uri,
    CURLOPT_HTTPHEADER => array(),
    CURLOPT_COOKIE => '',
    CURLOPT_RETURNTRANSFER => true
);
curl_setopt_array($curl, $options);
$responseBody = curl_exec($curl);
$responseHeaders = curl_getinfo($curl);
$data = $responseBody;
$responseContentType = $responseHeaders['content_type'];
if ((strpos($responseContentType, 'json') !== false) || (substr(html_entity_decode(trim($data)), 0, 1) == "{")) {
    $jsonData = json_decode($data, true);
    if (!is_null($jsonData)) {
        $data = $jsonData;
    }
}
for ($i = 0; $i < count($data['origin_addresses']); $i++) {
    for ($j = 0; $j < count($data['destination_addresses']); $j++) {
        $from = $data['origin_addresses'][$i];
        $to = $data['destination_addresses'][$j];
        if ($data['rows'][$i]['elements'][$j]['status'] == "OK" && $data['rows'][$i]['elements'][$j]['distance']['value'] <= ($distance * 1000)) {
            $businessDistanceArray[$businesses[$j]->getId()] = $data['rows'][$i]['elements'][$j]['distance']['text'];
        } else {
            $businessDistanceArray[$businesses[$j]->getId()] = sfConfig::get('app_offer_not_applicable');
        }
    }
}
Recently I have started receiving a warning/error stating OVER_QUERY_LIMIT. After googling the problem (reference: https://developers.google.com/maps/documentation/javascript/usage) I have gathered that this occurs either if
-- the request limit per day exceeds 25,000 map loads
OR
-- too many requests are sent per second.
To my knowledge the application does not hit either of the above limits, but I still receive the warning. Please let me know what I could do to overcome this error:
Do I have to purchase a business license,
OR
since the application is currently non-profit, should I apply for a Google Earth Outreach Grant,
OR
should I just continue without any changes?
Here is a sample request sent:
http://maps.google.com/maps/api/distancematrix/json?origins=-25.198526444487587,133.20029780312507&destinations=-33.767636351789754,147.41606950000005|-33.905755787445024,151.15402221679688|-32.02195365825273,115.90862329062497|-37.79805567326585,145.01495173583987|-36.8839206402564,144.20859858437507|-31.362071510290537,117.42054612812501|-23.609959,143.72078599999998|-37.819317,145.12404889999993|-37.8186394,145.12360620000004|-37.816506,145.11867699999993|-37.815524,145.12131499999998|-30.708111,134.56642599999998&mode=driving&language&sensor=false
Please look into the info given above. Your feedback would be thoroughly appreciated.
Thanks in advance
There are two different limits, as Dr.Molle says. You exceeded the daily limit, which allows 2,500 elements to be queried in a 24-hour period. When this happens you'll get continuous OVER_QUERY_LIMIT errors until that 24-hour window expires and your quota refreshes.
The other limit to be on the lookout for is the short term limit of 100 elements every 10 seconds. When you exceed this limit you'll experience the OVER_QUERY_LIMIT error for a few seconds (i.e. until the 10 second window is up) and then you'll be able to use the service again.
If this is a user-facing application that loads the Google Maps JavaScript API and performs these distance matrix queries as a result of some user action the JavaScript Distance Matrix Service is definitely worth exploring. Limits from the JavaScript application are distributed among your website users, not the back-end server.
The Limit for the DistanceMatrix-Service is different:
100 elements per query.
100 elements per 10 seconds.
2 500 elements per 24 hour period.
So please check first how many origins/destinations you request. When origins * destinations is more than 100, you will have hit both the per-query limit and the per-10-seconds limit.
In this case (assuming you have only 1 origin), how many "businesses" do you request?
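Staying under the 100-elements-per-query and 100-elements-per-10-seconds limits usually means batching destinations client-side. A sketch of the batch sizing in Python (the actual HTTP call and the pause between batches are stubbed out; batch_destinations is an illustrative helper, not a Google API):

```python
def batch_destinations(destinations, origins=1, max_elements=100):
    """Split a destination list into chunks so that
    origins * len(chunk) never exceeds max_elements per query."""
    per_query = max(1, max_elements // origins)
    return [destinations[i:i + per_query]
            for i in range(0, len(destinations), per_query)]

dests = ["d%d" % i for i in range(250)]
batches = batch_destinations(dests, origins=1)
# Each batch would then be one Distance Matrix request, with a pause
# (e.g. time.sleep(10)) between batches to respect the per-10-seconds limit.
```

With 250 destinations and one origin this yields three requests of at most 100 elements each, which keeps every individual query legal; the sleep between them handles the short-term quota.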
Edit:
As the request did not exceed the limit for queries per 10 seconds, it may be that you repeatedly exceed the daily limit. Your request returns 12 elements, so you would exceed the limit with around 200 such requests in 24 hours; could that be happening?
You might be flagged as passing one or other limit if you are sharing hosting with others who are also using the service. Sharing a server may well mean sharing the limits on the service.