Tracking API for FedEx and UPS [closed]

Is there any JavaScript API available for tracking FedEx and UPS packages?

I googled for something like this but couldn't find anything, so I decided to do it server side in Ruby on Rails.
Here is how to send UPS and FedEx XML tracking requests to their test servers and read the responses.
For FedEx:
require 'net/http'
require 'uri'
require 'openssl'

track_no = '111111111111' # This is a test tracking number
# XML request body for FedEx
xml_req =
"<TrackRequest xmlns='http://fedex.com/ws/track/v3'><WebAuthenticationDetail><UserCredential><Key>YOUR_ACC_KEY</Key>
<Password>YOUR_ACC_PASSWORD</Password></UserCredential></WebAuthenticationDetail><ClientDetail>
<AccountNumber>YOUR_ACC_NUMBER</AccountNumber><MeterNumber>YOUR_ACC_METER_NUMBER</MeterNumber></ClientDetail>
<TransactionDetail><CustomerTransactionId>ActiveShipping</CustomerTransactionId></TransactionDetail>
<Version><ServiceId>trck</ServiceId><Major>3</Major><Intermediate>0</Intermediate><Minor>0</Minor></Version>
<PackageIdentifier><Value>#{track_no}</Value><Type>TRACKING_NUMBER_OR_DOORTAG</Type></PackageIdentifier>
<IncludeDetailedScans>1</IncludeDetailedScans></TrackRequest>"
path = "https://gatewaybeta.fedex.com:443/xml"
# This URL points to the FedEx test server
# For the live server the URL is: "https://gateway.fedex.com:443/xml"
url = URI.parse(path)
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_NONE # skips certificate checks; acceptable against the test server only
response = http.post(url.path, xml_req)
response_body = response.body
# Strip the namespace prefixes from the tags so Hash.from_xml yields clean keys
res = response_body.gsub(/<(\/)?.*?\:(.*?)>/, '<\1\2>')
hash = Hash.from_xml(res.to_s) # Hash.from_xml comes from ActiveSupport (available in Rails)
And that's it! The response ends up in the hash variable. I converted the XML response into a Hash because a Hash is easy to work with in the view when displaying the response data.
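For anyone puzzled by the gsub line: it drops the namespace prefix (e.g. v3:) from every tag so that Hash.from_xml produces clean keys. The same substitution, sketched in Python against a made-up, trimmed fragment in the shape of a namespaced FedEx reply:

```python
import re

# Made-up, trimmed fragment shaped like a namespaced FedEx reply.
sample = "<v3:TrackReply><v3:HighestSeverity>SUCCESS</v3:HighestSeverity></v3:TrackReply>"

# Same idea as the Ruby gsub(/<(\/)?.*?\:(.*?)>/, '<\1\2>'):
# drop everything between "<" (or "</") and the ":" inside each tag.
cleaned = re.sub(r"<(/)?.*?:(.*?)>", r"<\1\2>", sample)
print(cleaned)  # <TrackReply><HighestSeverity>SUCCESS</HighestSeverity></TrackReply>
```

Note that the non-greedy pattern relies on every tag carrying a prefix, which holds for these FedEx replies.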
For UPS:
require 'net/http'
require 'uri'
require 'openssl'

track_no = '1Z12345E1512345676' # This is a test tracking number
# XML request body for UPS; a heredoc is used so that #{track_no} is interpolated
# (a single-quoted string would send the literal text "#{track_no}" instead)
xml_req = <<XML
<?xml version="1.0"?><AccessRequest xml:lang="en-US"><AccessLicenseNumber>YOUR_ACC_LICENCE_NUMBER</AccessLicenseNumber>
<UserId>YOUR_ACC_USER_ID</UserId><Password>YOUR_ACC_PASSWORD</Password></AccessRequest>
<?xml version="1.0"?><TrackRequest xml:lang="en-US"><Request><TransactionReference>
<CustomerContext>QAST Track</CustomerContext><XpciVersion>1.0</XpciVersion></TransactionReference>
<RequestAction>Track</RequestAction><RequestOption>activity</RequestOption></Request>
<TrackingNumber>#{track_no}</TrackingNumber></TrackRequest>
XML
path = "https://www.ups.com/ups.app/xml/Track"
url = URI.parse(path)
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_NONE # skips certificate checks; use proper verification in production
response = http.post(url.path, xml_req)
response_body = response.body
hash = Hash.from_xml(response_body.to_s)
This hash variable now contains the response to the UPS tracking request in Hash format.

Another easy way to do it: just create a hyperlink with one of the following hrefs.
UPS:
http://wwwapps.ups.com/WebTracking/track?loc=en_US&track.x=Track&trackNums=put_tracking_number_here
FedEx:
http://fedex.com/Tracking?action=track&language=english&cntry_code=us&tracknumbers=put_tracking_number_here
(Not as elegant, but quick, easy, and it gets the job done!)
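If you build those links in code, it is safer to URL-encode the tracking number. A small Python sketch of a hypothetical helper; the parameter names are taken verbatim from the URLs above:

```python
from urllib.parse import urlencode

# Hypothetical helper that fills a tracking number into the public
# tracking-page URLs quoted above.
def tracking_link(carrier, tracking_number):
    if carrier == "ups":
        query = {"loc": "en_US", "track.x": "Track", "trackNums": tracking_number}
        return "http://wwwapps.ups.com/WebTracking/track?" + urlencode(query)
    query = {"action": "track", "language": "english", "cntry_code": "us",
             "tracknumbers": tracking_number}
    return "http://fedex.com/Tracking?" + urlencode(query)

print(tracking_link("ups", "1Z12345E1512345676"))
print(tracking_link("fedex", "111111111111"))
```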

Or you can use the active_shipping gem for a nicer and cleaner way to track your FedEx and UPS packages.


How should I go about parsing and using website data? I am using node js [closed]

I am using Node.js to create a bot that gets stats from r6.tracker.network, but when I request that host no data shows up, unlike when I use 'www.google.com' as the host. I don't really know where to go from here. I've tried several different debugging methods, but I haven't found anything yet. There is no output.
function getWebpage(parameter) {
  const pathuser = '/profile/pc/' + parameter;
  var http = require('http');
  var options = {
    host: 'r6.tracker.network',
    path: '/'
  };
  var request = http.request(options, function (res) {
    var data = '';
    res.on('data', function (chunk) {
      data += chunk;
    });
    res.on('end', function () {
      console.log(data);
    });
  });
  request.on('error', function (e) {
    console.log(e.message);
  });
  request.end();
}
They are redirecting you to the https version of their site.
The key here is to add this logging:
console.log(res.statusCode);
console.log(res.headers.location);
When you do that, you will see this:
301
https://r6.tracker.network/
In other words, they want you to use the https version of their site and they are redirecting you to do that. You won't get the content of the web page from the http URL. You have to use https.
Coding takeaways here:
Always look at the http status and only proceed normally if you get a 2xx response.
Be prepared for 3xx redirect responses.
FYI, if you use a more modern library such as got() or any of the other libraries listed here, they will all follow redirects for you automatically and they will gather the full response for you automatically too. And, they use the more modern promise-based method of asynchronous programming. I'd really suggest you pick one of those libraries for your outbound http requests.
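Those two takeaways condense into a tiny piece of status-handling logic. A sketch in Python rather than Node, with a hypothetical next_url helper (not part of any library):

```python
# Decide what to do with an HTTP response: follow a 3xx redirect via the
# Location header, proceed on 2xx, and treat anything else as an error.
def next_url(status, headers, current_url):
    if 300 <= status < 400 and "location" in headers:
        return headers["location"]  # follow the redirect
    if 200 <= status < 300:
        return None                 # proceed normally, read the body
    raise RuntimeError(f"unexpected status {status} for {current_url}")

# The exact situation from the question: a 301 pointing at the https site.
print(next_url(301, {"location": "https://r6.tracker.network/"},
               "http://r6.tracker.network/"))  # https://r6.tracker.network/
```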

Get email using Hunter IO API with Google Apps Script [closed]

Hello fellow developers,
I am trying to get the email count for any website using Hunter IO free API with Google Apps Script.
Hunter IO API reference : https://hunter.io/api/v2/docs#email-count
Here is my code.
function checkDomain() {
  var domain = 'stripe.com';
  var url = 'https://api.hunter.io/v2/email-count?domain=' + domain;
  var response = UrlFetchApp.fetch(domain);
  var result = response.getContentText();
  Logger.log(JSON.parse(result)); // <-- Line 56
}
I get this Error : SyntaxError: Unexpected token: < (line 56, file "Code")
Can anyone please help me understand this error and tell me how to retrieve a JSON response from this Hunter IO.
You need to pass the url variable, not domain:
var response = UrlFetchApp.fetch(url);
Fetching the bare domain string returns the site's HTML page, which starts with '<' — hence the SyntaxError when JSON.parse runs. I would also recommend building the URL with Utilities.formatString; it increases the readability and understanding of the code.
var domain = 'stripe.com';
var template = 'https://api.hunter.io/v2/email-count?domain=%s';
var url = Utilities.formatString(template, domain);
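For comparison outside Apps Script, the same build-then-parse flow sketched in Python. No live request is made; the JSON sample is a trimmed, assumed shape of the email-count response, so check the Hunter docs linked above for the real fields:

```python
import json

# Build the request URL from a template, mirroring Utilities.formatString.
template = "https://api.hunter.io/v2/email-count?domain=%s"
url = template % "stripe.com"
print(url)  # https://api.hunter.io/v2/email-count?domain=stripe.com

# Trimmed inline stand-in for the JSON body the endpoint would return
# (field names assumed for illustration only).
sample = '{"data": {"total": 42}}'
count = json.loads(sample)["data"]["total"]
print(count)  # 42
```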

How do I get data from a web API? [closed]

My Perl is pretty basic, and I need to make a project for my university.
I want to download data from a certain link. It is JSON data, so I know I have to use the JSON::Parse module from CPAN.
But how do I download the content of the link into a Perl variable? Should I use LWP's get()?
Aren't you supposed to be learning Perl if it's a university project?
Anyway, your program will look something like this. It uses the LWP::Simple module to fetch the JSON data, and then JSON::Parse to process it into a Perl data structure.
I've printed the author value from each item of the array, as requested in your comment.
use strict;
use warnings 'all';

use LWP::Simple 'get';
use JSON::Parse 'parse_json';

use constant URL => 'http://a.wykop.pl/observatory/entries/appkey,dJ4w7fXYpL';

my $json = get URL or die "Unable to get JSON data";
my $data = parse_json($json);

print "Authors:\n";
print "  $_->{author}\n" for @$data;
Output:
Authors:
Snurq
AferaZaAfera
Devest
igorsped
Matt23
labla
poprawnie-niepoprawny
Parkero
Speed666
Xune
Gasior9
mikolajeq
Aztek2201
blogerbezbloga
Pan_wons
PanKaczucha
NieznanyAleAmbitny
dzika_kaczka_bez_dzioba
ilili
Bager
bmbcz01
hydrocyfolumpus
acarter
Doctor_Manhattan
strumienzgor
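The same fetch-then-parse flow translates directly to other languages. A Python sketch of just the parsing step, run against an inline sample since the wykop.pl endpoint above may no longer respond:

```python
import json

# Inline stand-in for the JSON array the endpoint returned: a list of
# objects, each carrying an "author" field.
sample = '[{"author": "Snurq"}, {"author": "AferaZaAfera"}, {"author": "Devest"}]'
data = json.loads(sample)

print("Authors:")
for item in data:
    print(f"  {item['author']}")
```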

Import JSON document array / object to a Meteor Collection with correct _ids [closed]

What is the best way to import a 15,000-document JSON file of the following format (but with 30+ foo fields) into a Meteor collection?
[{"foo1":"foo1data1", "foo2":"foo2data1"}, {"foo1":"foo1data2", "foo2":"foo2data2"}, ...
{"foo1":"foo1dataN", "foo2":"foo2dataN"}]
I tried mongoimport, but that created ObjectIDs instead of string _ids, and I could not make it work without autopublish, although other collections created with Meteor work just fine on the client side.
Supposing the file is located on the server under pathToFile, you can do something like this:
var fs = Npm.require("fs");
var Fiber = Npm.require("fibers");

fs.readFile(pathToFile, 'utf8', function (err, data) {
  // handle the error if there is one
  data = JSON.parse(data);
  Fiber(function () {
    _.each(data, function (document) {
      SomeMeteorCollection.insert(document);
    });
  }).run();
});
Please note that the Fiber wrapper is required if you want to call any Meteor-specific routines, such as the collections API, from within asynchronous Node.js code.
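The _id problem from the question can also be handled at import time. A rough sketch of the same loop in Python, with a stubbed-out insert (everything here is illustrative, not Meteor API): assign each document a string _id yourself instead of letting the database mint an ObjectID.

```python
import json
import uuid

inserted = []  # stand-in for SomeMeteorCollection

def insert(document):
    # Give each document a string _id (Meteor's default) up front, which is
    # what the mongoimport attempt described above failed to do.
    document.setdefault("_id", uuid.uuid4().hex)
    inserted.append(document)

data = json.loads('[{"foo1": "foo1data1"}, {"foo1": "foo1data2"}]')
for document in data:
    insert(document)

print(len(inserted))
```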

Get a warning if an expected scheduled report email hasn't arrived [closed]

I (like most tech admins, I guess) have quite a lot of status info from scheduled services in my inbox. However, when a service fails, there's obviously no email sent. So I simply want a service that looks at my inbox and says: "Hey, this service did not send an email report yesterday - something's wrong!"
This must be solved somewhere already, I guess. Perhaps Gmail (or some other email provider) has a service of this kind; that would be great.
Wouldn't it be a better option to have a centralized monitoring solution like Nagios, configured so that it only sends out notifications when a service misses its heartbeat, reaches a high-water mark, or runs out of fuel? And then, of course, a second monitoring solution that monitors the main monitoring solution...
http://www.nagios.org/documentation
I'm not aware of any service like the one you describe, but a manual routine might go like this:
Have a folder/tag structure like this:
Services\Hourly-[NumberOfServices] (or add a folder per service)
Services\Daily-[NumberOfServices]
Services\Weekly-[NumberOfServices]
Services\Monthly-[NumberOfServices]
Have rules for incoming mail that filter each specific service notification and move it to the right folder based on its expected timing.
Wake up every hour and check whether there are unread messages in your Hourly folder. The number of unread messages should equal the NumberOfServices in the folder name. Read/process them and make sure to mark them all as read. Any service that didn't email gets spotted easily.
Wake up at 0:00 and check for unread messages in your Daily folder, etc.
Wake up at 0:00 on Saturday and check for unread messages in your Weekly folder, etc.
Wake up at 0:00 on the first of the month and check for unread messages in your Monthly folder, etc.
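Each of those wake-ups boils down to comparing an unread count against an expected count. A minimal Python sketch with made-up folder names and numbers:

```python
# Compare the number of unread report mails in a folder against the number
# of services expected to have reported; return a warning string or None.
def missing_reports(folder_name, expected_count, unread_count):
    if unread_count < expected_count:
        missing = expected_count - unread_count
        return f"{folder_name}: {missing} service(s) did not report"
    return None

print(missing_reports("Services/Hourly", 5, 3))
print(missing_reports("Services/Hourly", 5, 5))
```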
My advice would be to cut down on the noise generated by the services.
If you still feel you need a service, I can only provide a very basic .NET implementation, roughly based on the process above, that works with Gmail.
It is also portable to PowerShell.
using System;
using System.Net;
using System.Xml;
using System.Xml.XPath;

static void Main(string[] args)
{
    var resolver = new XmlUrlResolver
    {
        Credentials = new NetworkCredential("yourgoogleaccount", "yourpassword")
    };
    var settings = new XmlReaderSettings();
    settings.XmlResolver = resolver;
    var xr = XmlReader
        .Create("https://mail.google.com/mail/feed/atom/[name of your filter]"
        , settings);
    var navigator = new XPathDocument(xr).CreateNavigator();
    var ns = new XmlNamespaceManager(new NameTable());
    ns.AddNamespace("fd", "http://purl.org/atom/ns#");
    var fullcountNode = navigator.SelectSingleNode(
        "/fd:feed/fd:fullcount"
        , ns);
    Console.WriteLine(fullcountNode.Value);

    int fullcount = Int32.Parse(fullcountNode.Value);
    int expectCount = 10;
    if (expectCount > fullcount)
    {
        Console.WriteLine("*** NOT EVERY ONE REPORTED BACK");
    }
}
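At its core, the .NET snippet just reads the <fullcount> element out of Gmail's atom feed. A Python sketch of the same parse against a trimmed inline sample of the feed (same namespace as in the C# code, no network access):

```python
import xml.etree.ElementTree as ET

# Trimmed inline sample in the shape of Gmail's atom feed.
sample = '<feed xmlns="http://purl.org/atom/ns#"><fullcount>7</fullcount></feed>'
ns = {"fd": "http://purl.org/atom/ns#"}
fullcount = int(ET.fromstring(sample).find("fd:fullcount", ns).text)

expected = 10  # how many services should have reported
if expected > fullcount:
    print("*** NOT EVERY ONE REPORTED BACK")
```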
You mentioned Gmail, so you may be interested in googlecl, which gives you command-line controls for things like Google Calendar and Docs. Unfortunately they do not yet support Gmail, but if your long-term preference is to use a Gmail account as the hub of your status reports, then googlecl may be your best option.
In the short run, you can try out googlecl right now using the commands for Calendar, Blogger, or Docs, all of which are already supported. For example, these commands add events to Google Calendar:
google calendar add --cal server1 "I'm still alive at 13:45 today"
google calendar add "Server 1 is still alive at 2011-02-08 19:43"
...and these commands query the calendar:
google calendar list --fields title,when,where --cal "commitments"
google calendar list -q party --cal ".*"
Come to think of it, you may even find that Calendar, Blogger, or Docs are a more appropriate place than Gmail for tracking status updates. For example, a spreadsheet or calendar format should make it easier to generate a graphical representation of when a given service was up or down.
You still need to write a little program which uses googlecl to query the calendar (or blog, or docs, or whatever), but once you have simple command lines at your disposal, the rest should be pretty straightforward. Here's a link to further information about googlecl:
http://code.google.com/p/googlecl/
If you really want to use Gmail, and use it right now, they offer an IMAP interface. Using IMAP, you can perform numerous simple operations, such as determining if a message exists which contains a specified subject line. Here's one good place to learn about the details:
http://mail.google.com/support/bin/answer.py?hl=en&answer=75725
Here's a quick example that uses IMAP and Python to list the ten most-recent emails which have a given Gmail "Label":
import getpass, imaplib
# These gmail_* utilities are from https://github.com/drewbuschhorn/gmail_imap
import gmail_mailboxes, gmail_messages, gmail_message

# Update these next lines manually, or turn them into parms or somesuch.
gmail_account_name = "your_user_name@gmail.com" # Your full Gmail address.
mailbox_name = "StatusReports" # Use Gmail "labels" to tag the relevant msgs.

class gmail_imap:
    def __init__ (self, username, password):
        self.imap_server = imaplib.IMAP4_SSL("imap.gmail.com", 993)
        self.username = username
        self.password = password
        self.loggedIn = False
        self.mailboxes = gmail_mailboxes.gmail_mailboxes(self)
        self.messages = gmail_messages.gmail_messages(self)

    def login (self):
        self.imap_server.login(self.username, self.password)
        self.loggedIn = True

    def logout (self):
        self.imap_server.close()
        self.imap_server.logout()
        self.loggedIn = False

# Right now this prints a summary of the most-recent ten (or so) messages
# which have been labelled in Gmail with the string found in mailbox_name.
# It won't work unless you've used Gmail settings to allow IMAP access.
if __name__ == '__main__':
    gmail = gmail_imap(gmail_account_name, getpass.getpass())
    gmail.messages.process(mailbox_name)
    for next in gmail.messages:
        message = gmail.messages.getMessage(next.uid)
        # This is a good point in the code to insert some kind of search
        # of gmail.messages. Instead of unconditionally printing every
        # entry (which is what the code below does), issue some sort of
        # warning if the expected email (message.From and message.Subject)
        # did not arrive within the expected time frame (message.date).
        print message.date, message.From, message.Subject
    gmail.logout()
As noted in the code comments, you could adapt it to issue some sort of warning if the most-recent messages in that mailbox do not contain an expected message. Then just run the Python program once per day (or whatever time period you require) to see if the expected email message was never received.