Our organization is using BIM 360 Docs. I'm writing a service that should stay constantly up to date with any changes to documents/folders in the project. I'm using the Webhooks API to achieve this.
Everything works fine as long as the service is always running, but if it crashes or goes down for maintenance, it will inevitably miss some webhook calls and never know that a file/folder was updated, moved, or deleted.
What I'm looking for is a way to get all changes in the project files/folders that happened while my service was offline. Something like GET projects/:project_id/changes?sinceTs=1588764730.
If there is no such method, then during a "cold start" I would need to walk through the project hierarchy, comparing versions (or mtime) of the files/folders to find what has changed. This is doable but could take a lot of time, as our typical project contains ~6k folders.
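For reference, the walk I have in mind would look roughly like this (a sketch only, assuming the Data Management folder-contents endpoint and that each entry exposes a lastModifiedTime attribute to compare against; paging, throttling and error handling omitted):

    import requests

    BASE = "https://developer.api.autodesk.com/data/v1"

    def walk_folder(project_id, folder_id, token, since_ts, changed=None):
        """Recursively collect ids of entries modified after since_ts (ISO-8601 string)."""
        changed = [] if changed is None else changed
        url = f"{BASE}/projects/{project_id}/folders/{folder_id}/contents"
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        resp.raise_for_status()
        for entry in resp.json().get("data", []):
            attrs = entry.get("attributes", {})
            if attrs.get("lastModifiedTime", "") > since_ts:
                changed.append(entry["id"])
            if entry.get("type") == "folders":       # recurse into subfolders
                walk_folder(project_id, entry["id"], token, since_ts, changed)
        return changed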
Optimally, it'd be best to set up a cluster of redundant instances so you can update/maintain each one of them and still have the ability to receive callbacks as a whole, or at least have a stand-in service to receive and persist (temporarily) the callbacks for your app to consume when it comes back online.
I'd suggest having an always-on gateway (such as a FaaS on AWS/Azure, etc.) with the ability to either trap the callbacks when your app is down for maintenance or redirect them to your stand-in.
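A minimal sketch of such a stand-in, assuming a plain Flask endpoint that just spools every callback body to disk for your service to replay later (the route, storage location and payload handling are only placeholders):

    import json, time, uuid
    from pathlib import Path
    from flask import Flask, request

    app = Flask(__name__)
    SPOOL = Path("webhook-spool")   # replace with a queue/DB in a real setup
    SPOOL.mkdir(exist_ok=True)

    @app.route("/webhooks/bim360", methods=["POST"])
    def trap_callback():
        # Persist the raw payload; the main service replays these files later.
        payload = request.get_json(force=True, silent=True) or {}
        name = f"{time.time():.0f}-{uuid.uuid4().hex}.json"
        (SPOOL / name).write_text(json.dumps(payload))
        return "", 200   # acknowledge quickly so the callback isn't retried/dropped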
I have two or three sets of Azure credentials for Work, Work Admin, and Personal. This morning, I clicked the wrong login credential during an interactive login while doing some local development. My local dev app now has an identity of me@company.com, when I need my identity to actually be me@admin.com. Because I clicked the wrong identity, my application immediately starts getting obvious authorization errors.
My implementation is pretty naive right now, and I'm relying on the Python Azure SDK to realize when it needs to be logged in, and to perform that login without any explicit code on my end. This has worked great so far, being able to do interactive login, while using the Azure-provided creds when deployed.
How can I get my local dev application to forget the identity that it has and prompt me to perform a new interactive login?
Things I've tried:
Turning the app off and back on again. The credentials are cached somewhere, I gather, and rebooting the app is ineffective.
Scouring Azure docs. I may not know the magic word, and as a consequence many search results have to do with authentication for users logging into my app, which isn't relevant.
az logout did not appear to change whatever cache my app is using for its credential token.
Switching python virtual environments. I thought perhaps the credential would be stored in a place specific to this instance of the azure-sdk library, but no dice.
Scouring the azure.identity python package. I gather this package may be involved, but don't see how I can find and destroy the credential cache, or any way to log out.
Deleting ~/.azure. The python code continued to use the same credential it had prior. ~/.azure must be for the az cli, not the SDK.
Found it! The AzureML SDK appears to be storing auth credentials in ~/.azureml/auth/.
Deleting the ~/.azureml directory (which didn't seem to have anything else in it anyway) did the trick.
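If you want to automate that, a tiny helper along these lines clears the cache so the next run prompts for a fresh interactive login (path as found above):

    import shutil
    from pathlib import Path

    cache = Path.home() / ".azureml"   # where the AzureML SDK was caching auth tokens
    if cache.exists():
        shutil.rmtree(cache)           # the next interactive login will repopulate it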
The Python garbage collector provides access to unreachable objects that the collector found but could not free. Since the collector supplements the reference counting already used in Python, you can disable it if you are sure your program does not create reference cycles. Refer here
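A minimal sketch of both points with the standard gc module (only disable the collector if you are sure no reference cycles are created):

    import gc

    gc.disable()                 # turn off cycle collection; refcounting still frees objects
    print(gc.isenabled())        # False
    unreachable = gc.collect()   # run a manual pass; returns number of unreachable objects found
    print(gc.garbage)            # objects the collector found but could not free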
You can use weak references: a weak reference to an object is not enough to keep the object alive. When the only remaining references to a referent are weak references, garbage collection is free to destroy the referent and reuse its memory for something else. However, until the object is destroyed, the weak reference may return the object even if there are no strong references to it.
For using weak references, refer here: reference 1 & reference 2
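A short illustration of that behaviour with the standard weakref module:

    import weakref

    class Node:
        pass

    obj = Node()
    ref = weakref.ref(obj)   # a weak reference; it does not keep obj alive
    print(ref())             # <__main__.Node object ...> while a strong reference exists

    del obj                  # drop the only strong reference
    print(ref())             # None: the referent has been garbage collected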
I am trying to integrate our third party application with NetSuite. I want to be able to import sales invoice details generated from our third party system (which uses REST API) into the NetSuite invoice form.
The frequency of import is not too crucial: an immediate import would be ideal, but sending data once a day is fine as well.
I want to know what I have to use to do this API integration - SuiteTalk, RESTlet or Suitelet.
I am completely new to this topic, and after a few days of research I learned that there are three options for an API integration with NetSuite (Suitelets, RESTlets, and SuiteTalk, which comprises REST- and SOAP-based web services). I also learned that there are scheduled scripts and user events, but I'm not too clear on the idea.
I need some help identifying which integration option I should choose.
Any and all information about NetSuite API integration is appreciated!
I would avoid REST/SOAP. SOAP is outdated, and REST is incomplete and difficult to use.
Suitelets are for when you want to present your own custom UI to frontend users, like a special new kind of custom form not relevant to any particular record. Probably not what you want.
What you probably want is to design a RESTlet. A RESTlet is a way for you to set up your own custom URL inside NetSuite that your program can talk to from outside NetSuite, like a webpage. You can pass data to the RESTlet either in the URL or in the body of an HTTP request (e.g. as a JSON object), and you can get data back out from the body of the HTTP response.
A RESTlet is a part of SuiteTalk. The method of authenticating a RESTlet is the same as the method of authenticating a request to the REST API, so learning about SuiteTalk is helpful. The code you use to write the RESTlet, SuiteScript, is the same kind of code used to write Suitelets and other kinds of scripts.
So you will want to learn about SuiteTalk, and then, in particular, SuiteTalk restlets.
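As a rough sketch of what calling a RESTlet from your external system could look like (Python purely as an example; the account-specific RESTlet URL, script/deploy IDs and the TBA consumer/token values are placeholders you get from your integration record and access token):

    import requests
    from requests_oauthlib import OAuth1

    # Placeholders: values come from the NetSuite integration record and access token
    url = ("https://ACCOUNT_ID.restlets.api.netsuite.com/app/site/hosting/"
           "restlet.nl?script=123&deploy=1")
    auth = OAuth1(
        client_key="CONSUMER_KEY", client_secret="CONSUMER_SECRET",
        resource_owner_key="TOKEN_ID", resource_owner_secret="TOKEN_SECRET",
        realm="ACCOUNT_ID", signature_method="HMAC-SHA256",
    )

    # The HTTP request body carries the invoice data as JSON;
    # the RESTlet's post handler receives it and returns a JSON response.
    resp = requests.post(url, json={"invoiceId": "INV-1001", "amount": 125.50}, auth=auth)
    print(resp.status_code, resp.json())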
This is a really subjective issue.
It used to be that SOAP/SuiteTalk was a little easier in terms of infrastructure, and since NetSuite's offerings are ever changing, REST/SuiteTalk might fill this space in the future.
Since NetSuite deprecated the Full Access role, setting up integrations almost always involves the integrator having to provide a permissions spec. The easiest way to do that is via a bundle. For token-based authentication (TBA) there also needs to be an integration record, from which you need the Consumer ID and Secret tokens.
So as of this writing, the setup for SOAP/SuiteTalk and RESTlets is roughly the same. The easiest way to communicate these is with a bundle, so if you are a NetSuite dev with a dev account you can set these up in a bundle and have your customer import it.
So, equal so far, but here are the differences:
SOAP/SuiteTalk is slow. IMO it is not suitable for an interactive interface.
SOAP/SuiteTalk: the code is all in your external app, so changes to the code don't require any changes in the target account.
RESTlets can be pretty speedy. I've used these for client interactions.
RESTlet updates require re-loading your bundle or overwriting your bundle files in the target account (with the resulting havoc if an admin refreshes the bundle).
RESTlets give you access to the features of the account on which you are running, so your code can run the appropriate chunks. For instance, features such as matrix items, multi-location inventory, OneWorld, pick/pack/ship, volume pricing, and multi-currency will all change the data model of the account your code is running against. RESTlets can detect which features are enabled; SOAP/SuiteTalk cannot.
So really the only advantage I see at this point for SOAP/SuiteTalk is that code updates don't require access to the target account.
Who is making the changes? If it is your NetSuite developers, then your options are SUITELET or RESTLET.
If it's your third-party application team, they own the code and the process and do all their work sitting outside of NetSuite - your option is SUITETALK/SOAP. Of course, they need to know something about NetSuite, but your business analyst would be sufficient to support them. As of 2020.1+, there is also support for native REST APIs in addition to SOAP, in case you still want to use REST but not write your own RESTLETS.
As the above comments mention, SuiteTalk does perform a little slower than calling RESTLETS. So that may be one of the deciding factors.
You may consider SUITELETs for integration only if you want to bypass all authentication schemes, by setting the suitelet as public. Highly inadvisable though.
If the third-party application supports REST APIs, you could call them directly from within NetSuite - either from user events or from scheduled scripts.
You can also consider iPAAS platforms like Dell Boomi, Celigo, Jitterbit, etc. These are general-purpose integration platforms, and make connecting one platform to another easy, with minimal coding. If your Company is already invested in these iPAAS platforms for other enterprise applications, then the choice is that much simpler.
I have been looking everywhere for a solution to this problem.
At my work, we are trying to integrate Maximo with another system via the other system's REST API (which returns JSON responses). I am able to make this integration work on a small scale; however, this API takes upwards of 5 seconds to respond per request. Currently, I have defined this system as a JSON Resource, and I copy daily "snapshots" of the non-persistent data to a persistent attribute using an automation script. The requests all run in sequence, which works (slowly) for 5 assets in testing, and will definitely not scale to thousands of calls a day.
Assume that the API of the external system cannot be modified in any way... Is there a way to query this API in a non-blocking way? I'd imagine that if I could send a request, and send the next, etc. without needing to wait for a reply to proceed, this would solve the problem.
I looked into Invocation and Publishing Channels, and also Enterprise Services, and it seems like Enterprise Services along with JMS Queues might be what I need; however, the documentation says that these only support queuing incoming data, and I can't see how this solves my problem.
Any help? I am completely stuck on this.
Thank you!
I had to do something that sounds similar, once. I tried JSON Resources, but they didn't work for me. I ended up using the examples in Maximo 7.6 Scripting Features to do it. The first code sample in that document is a library script for making HTTP/S calls using out-of-the-Maximo-box libraries, and other examples in that document use IBM's JSONObject and JSONArray classes (also available out of the Maximo box) to parse responses.
To get things going concurrently/multithreaded, you could configure a cron task to call your automation script, and configure multiple instances on various schedules to call the same one, using the args or some other mechanism to prevent collisions.
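Outside of Maximo-specific cron tasks, the underlying idea of not waiting for each reply before sending the next request looks roughly like this (a plain-Python illustration of the concurrency pattern, not code you can paste into an automation script as-is; the URL and asset IDs are placeholders):

    import concurrent.futures
    import requests

    ASSET_IDS = ["A100", "A101", "A102"]   # placeholder asset identifiers

    def fetch(asset_id):
        # Each slow (~5 s) call runs on its own worker thread.
        resp = requests.get(f"https://thirdparty.example.com/api/assets/{asset_id}",
                            timeout=30)
        resp.raise_for_status()
        return asset_id, resp.json()

    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        for asset_id, data in pool.map(fetch, ASSET_IDS):
            print(asset_id, data)   # persist the snapshot here instead of printing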
I have set up a local GraphHopper service on a local server, and it works as advertised. I can pass it a set of points via REST and get back a happy little JSON file of directions and an encoded route. Of course, "out of the box" the routing API is missing a toggle available in the paid Routing API service via graphhopper.com, and that is the optimize=true/false flag. This little addition will not only route between your passed points, but when set to true will also re-order them into the most optimal route.
Now I imagine that to get this additional functionality one needs to somehow "bake in" some level of jsprit code. My level of understanding of Java and compiling code, however, is woefully inadequate here. Looking over numerous jsprit sites, the best help I can find is "look at the source code for examples". Is there any sort of guide for building jsprit into the standard GraphHopper JAR file, or does anyone know of any pre-built JARs out there with this functionality already built in? It's probably a long shot, but any help would be appreciated.
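For reference, on the hosted service that toggle is just a query parameter on the Routing API; a rough sketch of the difference I'm after (the API key and coordinates are placeholders, and my local server on the default port 8989 doesn't honor the flag out of the box):

    import requests

    params = [
        ("point", "52.5170,13.3889"),   # placeholder coordinates
        ("point", "52.5206,13.3862"),
        ("point", "52.5246,13.3781"),
        ("vehicle", "car"),
        ("optimize", "true"),           # hosted Routing API only; re-orders the points
        ("key", "YOUR_API_KEY"),
    ]

    # Hosted service (supports optimize=true):
    hosted = requests.get("https://graphhopper.com/api/1/route", params=params)

    # Local server (default port 8989), without the optimize toggle:
    local = requests.get("http://localhost:8989/route", params=params[:-2])
    print(hosted.status_code, local.status_code)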
As our systems grow, there are more and more servers and services (different types, and multiple instances of the same type that require minor config changes). We are looking for a "centralized configuration" solution, preferably an existing one and nothing we need to develop from scratch.
The idea is something like this: a service goes up, it knows a single piece of data (its type+location+version+serviceID or something like that) and contacts some central service that will give it its proper config (file, object, or whatever).
If the service that goes online can't find the config service, it will either use a cached config or refuse to initialize (the behavior should probably be specified in the startup parameters it's getting from whoever or whatever is bringing it online).
The config service should be highly available, i.e. a cluster of servers (ZooKeeper keeps sounding like a perfect candidate).
The service should preferably support the concept of inheritance, allowing a global configuration file for the type of service and then specific overrides or extensions for each instance of the service by its ID. Also, it should support something like config versioning, allowing us to keep different configurations of the same service type for different versions, since we want to rely more and more on side-by-side rollout of services.
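To illustrate the kind of startup lookup I have in mind, here is a rough sketch with ZooKeeper via the kazoo client (the znode layout and JSON payloads are just made up for the example; caching/fallback behavior is omitted):

    import json
    from kazoo.client import KazooClient

    def load_config(service_type, service_id, version, hosts="zk1:2181,zk2:2181"):
        zk = KazooClient(hosts=hosts)
        zk.start(timeout=5)
        try:
            config = {}
            # Global config for the service type/version first, then per-instance overrides.
            for path in (f"/config/{service_type}/{version}/_global",
                         f"/config/{service_type}/{version}/{service_id}"):
                if zk.exists(path):
                    data, _stat = zk.get(path)
                    config.update(json.loads(data.decode("utf-8")))
            return config
        finally:
            zk.stop()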
The other side of the equation is that there is a config admin tool that connects to the same centralized config service, and can review and update all the configurations based on the requirements above.
I know that if I modify the core requirement from the service pulling config data to having the data pushed to it, I can use something like Puppet or Chef to manage everything. I have to be honest, I have little experience with these two systems (our IT team has more), but from my investigations they did not seem to be the right tools for this job.
Are there any systems similar to the one I describe above that anyone has integrated with?
I've only had experience with home grown solutions so my answer may not solve your issue but may help someone else. We've utilized web servers and SVN robots quite successfully for configuration management. This solution would not mean that you would have to "develop from scratch" but is not a turn-key solution either.
We had multiple web servers, each refreshing its configurations from an SVN repository on a synchronized per-minute basis. The clients would make requests to the servers with /type=...&location=...&version=... style HTTP arguments. Those values could then be used in the views when necessary to customize the configurations. We did this both with Spring XML files that were being reloaded live and with standard field=value property files.
Our system was pull only, although we could trigger a pull via JMX if necessary.
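A client pull in that setup was essentially just an HTTP GET with those arguments, something like the following (the host and parameter names are only illustrative):

    import requests

    def pull_config(service_type, location, version):
        resp = requests.get(
            "http://config-server.internal/config",   # placeholder config web server
            params={"type": service_type, "location": location, "version": version},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.text   # e.g. a Spring XML file or a field=value property file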
Hope this helps somewhat.
Config4* (of which I am the maintainer) can provide you with most of the capabilities you are looking for out-of-the-box, and I suspect you could easily build the remaining capabilities on top of it.
Read Chapters 2 and 3 of the "Getting Started" manual to get a feel for Config4*'s capabilities (don't worry, they are very short chapters). Doing that should help you decide how well Config4* meets your needs.
You can find links to PDF and HTML versions of the manuals near the end of the main page of the Config4* website.