Global Payments Hosted Payment Page integration - delayed POSTs back to merchantUrl in sandbox and prod - realex-payments-api

I'm running our Hosted Payment Page integration the same way as the official JS library example:
https://github.com/globalpayments/rxp-js/blob/master/examples/hpp/process-a-payment-embedded-autoload-callback.html
All is good except that the response with the approved/failed transaction is very slow to come back to our side:
https://github.com/globalpayments/rxp-js/blob/9909985b96ab5ed945614affad5f3739827f956b/examples/hpp/process-a-payment-embedded-autoload-callback.html#L16
e.g. the form gets presented, you enter your card details and click submit (on the HPP), then 3D Secure shows and does its thing, but the result only comes back in the answer line (the above link, line 16) around 4 minutes later. I'm not sure why it's so slow. Sandbox and production behave the same.
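For reference, the wiring in question looks roughly like the sketch below. This is recalled from the linked example rather than verified against it: the embedded.init argument order and the autoload flag are assumptions, so check the file itself before copying.

```js
// Sketch of the embedded autoload-callback pattern (argument order assumed
// from the linked rxp-js example; verify against the repo).
RealexHpp.setHppUrl("https://pay.sandbox.realexpayments.com/pay");

// hppJson is the JSON produced server-side by the Realex/Global Payments SDK.
RealexHpp.embedded.init(
  "autoload",      // assumed: load the HPP iframe immediately, no pay button
  "targetIframe",  // id of the iframe that hosts the HPP
  function (answer, close) {
    // The "answer" line from the question: this fires when the HPP POSTs the
    // approved/declined result back. Per the self-answer below, the ~4 minute
    // delay happened here because a firewall blocked the POST on port 443.
    console.log(answer);
    close();
  },
  hppJson
);
```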
I'm opening a support case anyway, but if anyone has any ideas.
Thanks,
Gavin.

It turned out the firewall on the box was blocking HTTPS / port 443 from the public. Obviously the libs do something that needs it, because everything was instant with the firewall off.


How to remove all data Chrome stores for a URL?

TL;DR: I'd like to make Chrome's state as though it had never, ever visited a certain URL before.
Longer version:
I'm working on an application and have a complicated problem regarding XSS vulnerabilities: the browser 'remembering' something about a previous session could cause nonces not to match. The upshot is that I need to be absolutely sure that when I visit the app URL, Chrome hasn't 'remembered' anything about it from any previous session(s).
Here's what I've tried:
Firstly, visiting: chrome://settings/cookies/detail?site=example.com and deleting all the cookies
Secondly, visiting: chrome://settings/clearBrowserData and deleting everything (unfortunately, this doesn't seem to be possible for one url at a time?)
I can prove that Chrome has not completely 'forgotten' the site. The proof is complicated, but basically: I place an app (with a distinctive favicon) at the URL, visit the URL, close out that tab, then complete the steps above to clear browser data and cookies (at this point Chrome should have forgotten everything). Yet when I put a different app at the same URL and visit it, Chrome uses the old app's favicon, which (I think) proves that it hasn't completely forgotten everything it knew about the URL!
So, that's the long version. But the TL;DR is to simply make it as though Chrome had never visited a site (preferably without altering data stored for other sites, or doing anything extreme like completely uninstalling/reinstalling).
A third attempt
To empty the cache and hard reload, press cmd + opt + j to bring up the developer console, then right-click on the refresh button and select 'Empty Cache and Hard Reload'. Yet the old favicon still remains, indicating that not all info from the site was removed.
After about 2 hours, I came up with the following techniques to try to remove the favicon, but even after all of the following steps, the favicon from the previous app still remains in Chrome's 'memory'!
Do the first two steps from the question:
visit: chrome://settings/cookies/detail?site=example.com and delete all the cookies (replace example.com with the url in question)
visit: chrome://settings/clearBrowserData and delete everything (would be great to know how to do this for a single site)
Right click on the tiny icon to the immediate left of the url (it will be a lock if using https, or the letter 'i' if using http).
Go into each of the categories listed (e.g. 'Cookies', 'Site Settings', etc.) and delete them all
Note
I didn't find a solution for removing all of a site's data from Chrome; however, I found you can start a completely isolated Chrome session with these instructions.
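For what it's worth, an isolated session can also be launched programmatically by pointing Chrome at a throwaway profile directory. A minimal sketch (Node.js, assuming a google-chrome binary on the PATH; the flags are standard Chromium switches):

```js
// Launch Chrome with a brand-new, empty profile so it has "never seen" any
// site: no cookies, cache, history, or favicons carry over.
const { spawn } = require("child_process");
const { mkdtempSync } = require("fs");
const { tmpdir } = require("os");
const path = require("path");

const profileDir = mkdtempSync(path.join(tmpdir(), "chrome-fresh-"));
spawn("google-chrome", [
  `--user-data-dir=${profileDir}`, // isolated profile directory
  "--no-first-run",                // skip the first-run dialogs
  "https://example.com",           // replace with the URL under test
], { stdio: "inherit" });
```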

NetSuite SuiteScript 2.0 integration with an external hook

I need to set up EPL2 label printing from NetSuite. Unfortunately the company this is for is very small and they don't have much money to spend, hence they cannot buy a $1000 label printing solution.
The current system uses a Linux server that sends a file to one of the CUPS print server queues using the Linux cat command. From there it goes to an Intel NetportExpress 10/100 print server and then to the Argox V1000+ label printer, via a corporate network IP address.
Instead I started looking at some cheap options:
Pop up a browser window with content type text/plain and use a Suitelet to populate that browser window with the EPL2 label printer codes, then open a print dialog so that the user can print to the label printer driver (see the Suitelet sketch after this list). This requires installing the label printer driver for all users. Sadly I could not get this to print a label.
Integration from NetSuite via a Restlet to an external Python application (on Linux) that can then perform the Linux cat command needed to print the label (see the Restlet sketch after the update below). The Restlet works nicely, but unfortunately there does not seem to be a way to have some sort of hook that fires when a new label custom record arrives. Therefore I have to keep polling the Restlet from Python every 2 seconds to see if a new label is waiting to be printed. I started running this about an hour ago and so far I have made about 2500 requests without errors. My concurrency limit is 5 and I'm using 2, so that seems OK. The script does very little, so I don't think there will be size limit issues. The problem is just that I wonder whether NetSuite will eventually terminate my script for making so many requests. I'm not sure whether there is such a governance rule, but I can't imagine they won't eventually stop that sort of thing.
Use the http module to send data in an AJAX-type manner. This should be able to pick up when new data arrives instead of having to poll (not sure). The problem with this is that I assume I will need a static IP address, which is sadly an expensive option.
Use NetSuite SOAP web services, which might have a hook instead of polling (not sure). I think this would not be free (like Restlets) either.
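For option 1, the Suitelet might look roughly like the sketch below. All record and field ids here are hypothetical placeholders, not anything from the actual account:

```js
/**
 * @NApiVersion 2.x
 * @NScriptType Suitelet
 */
// Serve raw EPL2 as text/plain so the browser window shows printer codes
// that can be sent to the label printer driver via the print dialog.
define(['N/record'], function (record) {
    function onRequest(context) {
        var label = record.load({
            type: 'customrecord_label',             // hypothetical custom record
            id: context.request.parameters.labelid  // passed on the Suitelet URL
        });
        var epl = label.getValue({ fieldId: 'custrecord_epl2_body' }); // hypothetical field

        context.response.setHeader({ name: 'Content-Type', value: 'text/plain' });
        context.response.write(epl);
    }
    return { onRequest: onRequest };
});
```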
So my question is whether there is a better option that I'm missing, or what you would recommend. Also, would I hit some sort of governance limit if I poll every 2 seconds with option 2?
Update: The polling mysteriously stopped working after 7395 requests and about 3 hours. It did not return an error that I'm aware of. The rejected requests count on Integration Governance shows 0.
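The Restlet being polled in option 2 could be as simple as the sketch below: return any label records that have not yet been printed (record and field ids are again hypothetical):

```js
/**
 * @NApiVersion 2.x
 * @NScriptType Restlet
 */
// GET handler the external poller calls every 2 seconds: return pending labels.
define(['N/search'], function (search) {
    function get(requestParams) {
        var pending = [];
        search.create({
            type: 'customrecord_label',                   // hypothetical record
            filters: [['custrecord_printed', 'is', 'F']], // hypothetical "printed" checkbox
            columns: ['custrecord_epl2_body']
        }).run().each(function (result) {
            pending.push({
                id: result.id,
                epl: result.getValue({ name: 'custrecord_epl2_body' })
            });
            return true; // continue iterating
        });
        return JSON.stringify(pending);
    }
    return { get: get };
});
```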
I used to do the emailing thing quite a bit and it works pretty well. Volume may be an issue.
Another thing to do is just get a stable public address with something like ngrok.
ngrok runs on linux/mac/windows so you'd be able to write an app that listens on a particular port. Netsuite would send an https post to that app at (for instance) https://printing.mycompany.ngrok.io and the app would handle local printing.
I believe ngrok runs about US$60/year.
The app can verify identity with some sort of timestamp and hash so that if someone does get hold of the HTTPS address, they couldn't easily use all your paper or cause a DoS situation.
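A minimal sketch of that listener (Node.js; the header names, shared secret, and port are assumptions for illustration):

```js
// Receive the HTTPS POST ngrok forwards from NetSuite, verify a timestamp +
// HMAC signature so a leaked URL can't be abused, then hand off to printing.
const http = require("http");
const crypto = require("crypto");

const SECRET = process.env.LABEL_SECRET; // shared with the NetSuite script

http.createServer(function (req, res) {
    let body = "";
    req.on("data", (chunk) => { body += chunk; });
    req.on("end", () => {
        const ts = req.headers["x-timestamp"] || "";
        const sig = req.headers["x-signature"] || "";
        const expected = crypto.createHmac("sha256", SECRET)
            .update(ts + body).digest("hex");
        const fresh = Math.abs(Date.now() - Number(ts)) < 60 * 1000;
        if (!fresh || sig !== expected) {
            res.statusCode = 401;
            res.end("rejected"); // stale or mis-signed request
            return;
        }
        console.log("printing label:\n" + body); // hand off to the local print path here
        res.end("printed");
    });
}).listen(8080); // ngrok tunnels https://printing.mycompany.ngrok.io to this port
```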
We got bamboozled by a printer vendor (Zebra) before we found out that we could HTTP POST to most printers at PRINTER_IP:9100, just sending the raw ZPL/EPL as the body.
Look into IPP-enabled printers; most are these days. It saves you thousands in the long run if you have a large warehouse operation like we do.
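The raw-socket variant of this is tiny. A sketch (Node.js; the printer IP is a placeholder and the EPL2 body is a trivial test label):

```js
// Port 9100 is the raw "JetDirect" port most networked label printers listen
// on: whatever bytes arrive are interpreted directly as ZPL/EPL.
const net = require("net");

const epl = 'N\nA50,50,0,4,1,1,N,"HELLO"\nP1\n'; // clear buffer, draw text, print 1 label

const socket = net.connect(9100, "192.168.0.42", function () {
    socket.end(epl); // write the label and close the connection
});
socket.on("error", (err) => console.error("printer unreachable:", err));
```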
Instead of polling, I would have NetSuite initiate the connection in an afterSubmit User Event script (sketched below).
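A sketch of that User Event script, assuming the ngrok listener above and the same hypothetical label record/field:

```js
/**
 * @NApiVersion 2.x
 * @NScriptType UserEventScript
 */
// Push instead of poll: when a new label record is created, POST its EPL2
// body straight to the external listener.
define(['N/https'], function (https) {
    function afterSubmit(context) {
        if (context.type !== context.UserEventType.CREATE) {
            return;
        }
        var epl = context.newRecord.getValue({ fieldId: 'custrecord_epl2_body' }); // hypothetical
        https.post({
            url: 'https://printing.mycompany.ngrok.io', // the ngrok address from the earlier answer
            body: epl,
            headers: { 'Content-Type': 'text/plain' }
        });
    }
    return { afterSubmit: afterSubmit };
});
```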
I've automated label printing by having NetSuite email attachments to a dedicated mail box which is monitored by a Linux server. My setup is documented here:
https://gist.github.com/michoelchaikin/80af08856144d340b335d69aa383dbe7
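The sending side of that approach is only a few lines of SuiteScript. A sketch (the author id and mailbox are placeholders; see the gist for the full setup):

```js
// Email the EPL2 body as a plain-text attachment to a mailbox that a Linux
// server watches and prints from.
define(['N/email', 'N/file'], function (email, file) {
    function sendLabel(eplBody) {
        var attachment = file.create({
            name: 'label.epl',
            fileType: file.Type.PLAINTEXT,
            contents: eplBody
        });
        email.send({
            author: 123,                        // placeholder employee internal id
            recipients: ['labels@example.com'], // the monitored mailbox
            subject: 'EPL2 label',
            body: 'Label attached.',
            attachments: [attachment]
        });
    }
    return { sendLabel: sendLabel };
});
```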

WebRTC video works locally but not remotely

There's quite a bit of code involved, so I threw what I had into a temporary github repo:
https://github.com/stevendesu/webrtc-failure
I'm learning WebRTC and long-term want to do some fancy stuff, but for now I'm starting simple: send a video from one computer to another. Unfortunately this failed. Here's what I've set up so far...
On a development server I own I installed coturn to act as a STUN/TURN server
I created two pages: broadcast.html and watch.html. The former creates a media stream and (using Socket.IO) sends the connection details to a signaling server. The latter gets the connection details from the signaling server and attempts to watch the stream
After running npm install you can npm start to run the server and access it at localhost:2017.
So here's what works:
After opening broadcast.html you are prompted for a broadcast ID. You can type anything here, but I usually just do an incrementing number - so I start with "1"
After entering a broadcast ID, and a short delay, you see your webcam feed on the screen. Looking at the console you can see several messages have been exchanged with the Socket.IO server
If you open watch.html in a new tab, you are prompted for a broadcast ID. Enter the same ID as before
After entering the ID, and a short delay, you will see your webcam feed on the new tab. Looking at the console you can see that the earlier ICE candidates and offer details were sent by the Socket.IO server and the watcher responded with an answer
Returning to the broadcast tab, you can verify that the answer was received and processed. A connection has now been established
For bonus points, the pc variable is in the global scope (PeerConnection) so in the console you can establish an RTCDataChannel and send messages between the tabs directly (bypassing the Socket.IO server)
Here's what doesn't work:
For now (and I know why this is, so it's not a concern) only the FIRST person to enter an ID into the watch.html page can actually see the broadcast. It's not "broadcast", it's just peer-to-peer, and once one connection has been established then future connections fail
My issue: if I open watch.html from a different computer or device (either on the same network or a different network) then the video never plays
In the latter case if you look at the console you'll see the offer and ice candidates are delivered to the watcher, the watcher generates an answer, the answer is sent back to the broadcaster, and the watcher sees a media stream added to the PeerConnection. This media stream is converted to a blob URL and assigned as the source of the video element.
I'm at a point where I don't know how to progress. I don't know why the video isn't showing up.
Your watch.js does not emit ICE candidates; that is one possible cause. If that doesn't help, you can use chrome://webrtc-internals to figure out what is going on -- see here for a description of how to interpret it.
You might also want to look into modernizing your code. https://webrtc.github.io/samples/src/content/peerconnection/pc1/ is a fairly simple example of modern WebRTC code using promises and other improvements, such as srcObject instead of the deprecated URL.createObjectURL.
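Putting both suggestions together, the watcher-side fixes would look roughly like this sketch (the socket event names and broadcastId are assumptions standing in for whatever the repo's signaling actually uses):

```js
// Relay the watcher's own ICE candidates and attach the stream via srcObject.
const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.example.com" }] // your coturn server here
});

pc.onicecandidate = (event) => {
    // Without this, the remote peer never learns the watcher's candidates.
    // Tab-to-tab it still works (host candidates line up), but across
    // machines or networks the connection can't complete.
    if (event.candidate) {
        socket.emit("ice-candidate", { broadcastId, candidate: event.candidate });
    }
};

pc.ontrack = (event) => {
    // Modern replacement for the deprecated URL.createObjectURL(stream):
    document.querySelector("video").srcObject = event.streams[0];
};
```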

TFS 2015 Code Viewer Not Working in Google Chrome

I found the following issue here on Stack Overflow, however I cannot comment as yet. I have a similar issue and wonder if there is anyone out there who has solved it.
https://stackoverflow.com/questions/40917501/tfs-2015-web-portal-code-viewer-not-working#
I am encountering something similar: in-house TFS 2015, and I can't view code in the web portal using Google Chrome, however IE is fine. I am not using HTTPS, though, so may be experiencing something slightly different.
When I do try to view a file in Chrome, the window where the code listing should be is simply blank. I did note too that the button for creating a new build definition appears to be indicating a broken image link.
This has not always been an issue. Around 4 months ago I could get the code view fine in Chrome and, to my knowledge (I have no access to the servers), nothing has changed apart from Chrome updates.
I've tried going back to previous versions of Chrome to no avail, though I wouldn't know which version I was on when this did work.
Interestingly, I have one or two .MD files around and these display perfectly well. They are simple text files. However when saved with .TXT extension (or anything else I've tried), they do not show. Curious.
Update
As you will see from the screenshot below, when a file has been selected (in this case a .SQL file), nothing at all appears where I would expect the view to populate.
As for the F12, I do get 5 of these:
Failed to load resource: net::ERR_CONNECTION_REFUSED
plus associated paths, of course. We use Webroot internally here, which has recently dropped in a Chrome extension; however, even when Webroot is disabled in its entirety (including removal of the extension) I get the same behaviour.
All other Chrome extensions have been removed too at varying times to try to give a clean browser.
I have no other pop up blockers, ad blockers, etc installed on the workstation.
Problem solved thanks to the F12 key suggestion.
After some grovelling I was granted domain admin privileges to have a dig around everything. It turns out that TFS was installed on ServerA with a URL port of 8080; this I knew from the original install, and it is obviously the path I follow to get to my TFS web interface. What had also been done subsequently, with no consultation of the Dev user group, was that a second TFS application tier had been installed on ServerB, and the port here was 8088.
I had not noticed the difference in path initially, assuming it was Chrome or workstation related. Anyway, I altered the port on ServerB to 8080 and everything jumped into life. I should not have made assumptions and should have paid more attention to the path in the error!
It seems the second application tier was set up on a non-production environment to allow senior Dev users access to the TFS Management Console rather than allowing them access to the original app tier which was on a production box. Our IT Operations just forgot to tell anyone.
Try updating your Chrome to the latest version (55.0.2883.87 m, 64-bit).
Also clear Chrome's cache. I have encountered similar issues; the solution was to clear the cache and connect to the web portal using another ID, then connect back using the original ID. I have no idea which one solved the problem. You could try both.
This problem is probably an isolated case, since TFS 2015 has been out for a long time.

IBM Connections: adding a library to a community

I'm trying to get Connections 4.5 working with Content Manager. I guess I've come quite far from the start, but there are many things I need to fix.
Sometimes my widgets just don't load; it says it cannot load widgets-config.xml.
When I restart the deployment manager and application server, everything looks good.
My biggest problem is adding a Library to a community, because I want to see how the workflow works, and I'd like to create a Linked Library from it. This is what I get when I try to add the Library widget to the community (the Linked Library widget works fine):
CLFWZ0004E: Event 'widget.added' sent to remote lifecycle handler at https://conserv.egroup.local/dm/atom/communities/feed returned bad response: 403 - Forbidden
I guess there is some problem with HTTPS access. Has anybody here ever faced this problem? Any hints?
UPDATE-1
After accessing that page directly, it gives me this:
<td:error>
<td:errorCode>UnsupportedOperation</td:errorCode>
<td:errorMessage>CQL5602: The attempted operation, GET, is not allowed on the resource /communities/feed.
Contact your administrator and provide the incident ID: 1381320497551.
The administrator should forward this information to the application owner.
</td:errorMessage>
</td:error>
So I guess there may be some problem with proxy policies. I tried making some changes, such as changing the default policy URL to *, but still no progress.
Hints?