Yesterday, the Microsoft Translator service started returning 502 errors from the API after working for most of the day. No code changes on our side. It has been down for over 12 hours now.
Microsoft has a test website (https://datamarket.azure.com/dataset/explore/bing/microsofttranslator) that also returns the error, so I am pretty confident it is not our code. The error that site returns is:
The request resulted in a backend time out or backend error. The team
is investigating the issue. We are sorry for the inconvenience. (502)
The support site is horrible, so I'm not confident my emails are going to anyone. I have tried valid keys from two different accounts and am still getting the errors. Is anybody else having this issue?
Screen shot of error here
We are also having this issue. It appears that v1 of the translator has been discontinued (or is out of service). However, v2 of the API works.
I am guessing you are using the form of authentication that only requires your MS datamarket account key (the one found here: https://datamarket.azure.com/account/keys).
With this form of authentication you use code similar to the following to do the translation:
// NetworkCredential is in System.Net; DataServiceQuery is in System.Data.Services.Client.
Microsoft.TranslatorContainer xlator = new Microsoft.TranslatorContainer(
    new Uri("https://api.datamarket.azure.com/Bing/MicrosoftTranslator/v1/Translate"));
// Both the user name and the password are the datamarket account key.
xlator.Credentials = new NetworkCredential("account key", "account key");
DataServiceQuery<Microsoft.Translation> xlateQry = xlator.Translate("translate me", "en", "fr");
Microsoft.Translation xlateResult = xlateQry.Execute().First();
string translateOutput = xlateResult.Text;
The TranslatorContainer and Translation classes within the Microsoft namespace come from generated code that MS provided with the first version of the translator API.
This is what we did and it quit working yesterday for us as well. It appears that MS has forcefully (and secretly AFAIK) discontinued this form of authentication in favor of their newer authentication scheme and API. It is worth noting that you are not able to access documentation for v1 of the API anymore when navigating from the MS translate API home pages.
However, I was able to follow the instructions for v2 of the API at these URLs to successfully create an ad-hoc HTTP translation request using my existing account:
Using the HTTP Interface
Obtaining an Access Token
When looking at "Obtaining an Access Token", go to the PowerShell example at the bottom for the specific URLs, and remember to use POST to get the auth token and GET for the translate request. Also remember to URL-encode the parameters in the auth token request. I only mention this because those are the things that tripped me up when working through the example with Postman in Chrome for the ad-hoc requests.
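In case it helps anyone else turning the ad-hoc requests into code, here is a rough Java sketch of the two calls. The URLs and parameter names are the ones I took from the linked "Obtaining an Access Token" page, so double-check them there; the client id/secret values are placeholders:

import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

public class TranslateV2Demo {
    public static void main(String[] args) throws Exception {
        String clientId = "your-client-id";          // placeholder: your datamarket application client id
        String clientSecret = "your-client-secret";  // placeholder: your datamarket application client secret

        // 1) POST URL-encoded credentials to the token service.
        String tokenBody = "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
                + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8")
                + "&scope=" + URLEncoder.encode("http://api.microsofttranslator.com", "UTF-8");
        HttpURLConnection tokenConn = (HttpURLConnection)
                new URL("https://datamarket.accesscontrol.windows.net/v2/OAuth2-13").openConnection();
        tokenConn.setRequestMethod("POST");
        tokenConn.setDoOutput(true);
        tokenConn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = tokenConn.getOutputStream()) {
            out.write(tokenBody.getBytes(StandardCharsets.UTF_8));
        }
        String tokenJson = read(tokenConn.getInputStream());
        // Crude extraction of the access_token field; use a real JSON parser in production code.
        String token = tokenJson.replaceAll(".*\"access_token\":\"([^\"]+)\".*", "$1");

        // 2) GET the translation, passing the token in a Bearer authorization header.
        String translateUrl = "https://api.microsofttranslator.com/v2/Http.svc/Translate"
                + "?text=" + URLEncoder.encode("translate me", "UTF-8")
                + "&from=en&to=fr";
        HttpURLConnection xlateConn = (HttpURLConnection) new URL(translateUrl).openConnection();
        xlateConn.setRequestProperty("Authorization", "Bearer " + token);
        System.out.println(read(xlateConn.getInputStream()));   // XML <string> element with the translation
    }

    private static String read(InputStream in) throws IOException {
        try (BufferedReader r = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            StringBuilder sb = new StringBuilder();
            for (String line; (line = r.readLine()) != null; ) sb.append(line);
            return sb.toString();
        }
    }
}

These are the same two requests the PowerShell example performs; only the HTTP client differs.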
It may very well be that this transition was well documented, but for some poor sap like myself who is inheriting an application that used v1 of the translate API, it sure looks like MS just left everyone on v1 out in the cold. It is not obvious when navigating the translate API documentation that there are even two versions, let alone that one of them would be discontinued.
Our portal has been running on Liferay 6.2 for several years. We have many services that use HTML forms (usually written with Alloy UI in FreeMarker) to let users submit requests. The server code is written in Java and uses the Liferay PortletRequest objects to retrieve the submitted form data.
However, recently these forms suddenly stopped working.
Specifically: if the form includes a file for uploading, then the ActionRequest object does not return any of the form fields as parameters the way it usually does (request.getParameter(parameterName) returns null instead of the string value that the user entered into the form). If the user does not include any files, it works normally.
This doesn't seem to be an issue with the forms or the Java code, as many forms whose code has not been touched in years suddenly stopped working. What's more, this stopped working partway through a day in which we didn't make any changes to the application.
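For reference, a stripped-down version of one of our action methods looks roughly like this (the class, parameter, and file names are made up for the example, but the calls are the standard Liferay 6.2 ones):

import java.io.File;

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;

import com.liferay.portal.kernel.upload.UploadPortletRequest;
import com.liferay.portal.util.PortalUtil;
import com.liferay.util.bridges.mvc.MVCPortlet;

public class RequestFormPortlet extends MVCPortlet {

    public void submitRequest(ActionRequest actionRequest, ActionResponse actionResponse) {
        // Text fields: this is the call that now returns null whenever a file is attached.
        String requesterName = actionRequest.getParameter("requesterName");

        // The uploaded file is read through the wrapped multipart request.
        UploadPortletRequest uploadRequest = PortalUtil.getUploadPortletRequest(actionRequest);
        File attachment = uploadRequest.getFile("attachment");
        String fileName = uploadRequest.getFileName("attachment");

        // ... validation and persistence omitted ...
    }
}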
I'm struggling to understand what I'm seeing in the logs. The error messages that feel most promising look like:
Caused by: org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. Stream ended unexpectedly
at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:351)
at org.apache.commons.fileupload.portlet.PortletFileUpload.parseRequest(PortletFileUpload.java:109)
at org.springframework.web.portlet.multipart.CommonsPortletMultipartResolver.parseRequest(CommonsPortletMultipartResolver.java:151)
... 208 more
But I haven't been able to find anything that seems relevant. Another type of error that might be related looks like:
10:00:09,095 WARN [http-bio-8080-exec-272][FileImpl:422] Unable to extract text from Scan4.JPG
org.apache.tika.exception.TikaException: Unexpected RuntimeException from org.apache.tika.parser.jpeg.JpegParser#3e34efc2
We've been trying to track down the issue for days now; I'm desperate and out of ideas. Can anyone think of any possible reasons why files would not upload?
I'm trying to access PubMed results via R using their API, but I consistently get fewer results than the same query returns on the web interface. By digging into the output I noticed that the problem lies in a different query translation between the two access methods.
I am using the rentrez package, but I get the same results with other related R packages, so I guess it's related to the API itself.
Here's the code to reproduce the results:
install.packages('rentrez')
rentrez::entrez_search(db="pubmed", term = '((model OR models OR modeling OR network OR networks) AND (dissemination OR transmission OR spread OR diffusion) AND (nosocomial OR hospital OR "long-term-care" OR "long term care" OR "longterm care" OR "long-term care" OR "hospital acquired" OR "healtcare associated") AND (infection OR resistance OR resistant)) AND (2010[PDAT]:2020[PDAT])')$count
[1] 7157
The same query on https://pubmed.ncbi.nlm.nih.gov/ returns 9263 results.
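To rule out rentrez itself, the count can also be checked directly against the E-utilities esearch endpoint that rentrez wraps; a quick Java sketch (the query string is shortened here, paste in the full query from above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PubmedCountCheck {
    public static void main(String[] args) throws Exception {
        // Replace with the full query string from the rentrez call above.
        String query = "(model OR network) AND (2010[PDAT]:2020[PDAT])";

        // esearch.fcgi is the E-utilities endpoint that rentrez calls under the hood.
        String url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
                + "?db=pubmed&retmax=0&term=" + URLEncoder.encode(query, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        StringBuilder xml = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            for (String line; (line = r.readLine()) != null; ) xml.append(line);
        }

        // The total hit count is the first <Count> element in the response.
        Matcher m = Pattern.compile("<Count>(\\d+)</Count>").matcher(xml);
        if (m.find()) {
            System.out.println("esearch count: " + m.group(1));
        }
    }
}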
Not sure if you still need this now. Just in case someone else has the same problem.
I had the same issue as you did, and I found something that might be useful in a GitHub issue.
It seems that the API service needs to be updated to match the new web service, but it's been a year now and still no promising announcement has been made officially.
An alternative is provided by the easyPubMed author. Hope this is what you were looking for.
easyPubMed Issue
According to the OpenShift docs, the following should return a result:
curl -X GET https://openshift.redhat.com/broker/rest/api
However, any call to the API actually just returns a single whitespace - including those called with username & password.
I've confirmed the issue from several different machines across the globe.
What might be the reason?
Inspired by a similar question regarding Twitter, I added .json at the end of the URL and it worked:
curl -X GET https://openshift.redhat.com/broker/rest/api.json
It's kind of disappointing that RedHat never replied to any inquiry regarding this issue.
The issue is logged here: https://bugzilla.redhat.com/show_bug.cgi?id=1324208
You can add yourself to the "cc" if you would like to be notified as the fix gets rolled out to production.
FYI, asking for help on a public forum and expecting a response from the company within 4 hours seems a bit much. In the future it may be worth checking for open bugs or asking the company directly instead.
When I try to create a MySQL database on Microsoft Azure using a plain REST request (PUT) to:
https://management.azure.com/subscriptions/<subscriptionid>/resourceGroups/resource-<id>/providers/successbricks.cleardb/databases/<my-database>?api-version=2014-04-01
I am getting this error:
HTTP STATUS CODE 400 Bad Request
Error message: 'Legal terms have not been accepted for this item on
this subscription. To accept legal terms, please go to the Azure
portal (http://go.microsoft.com/fwlink/?LinkId=534873) and configure
programmatic deployment for the Marketplace item or create it there
for the first time'
So I went to Microsoft Azure Portal, and I accepted the legal terms. I tried again, same error. I searched in almost the entire Azure Portal for some configuration about this and I found nothing.
Does anyone else have the same problem?
Thanks.
You should not only accept the terms but also follow the procedure for enabling programmatic access. It should be on the license page.
Programmatic deployment can only be found under Virtual Machines MySQL, not under Data Storage MySQL Database. Try the REST request again after you have enabled programmatic deployment.
In addition, I successfully created a MySQL database using the REST API without reproducing your error, but note that a request body needs to be sent as well when using a PUT request.
OK guys, found the solution. I don't know why, but if we change the JSON attribute { "plan.name": "Pay-As-You-Go" } to { "plan.name": "Free" } the database is created successfully.
I opened a support ticket to find out which MySQL plans are available. I will update the answer as soon as possible.
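For anyone hitting the same error, the call I ended up with looks roughly like the Java sketch below. The body only echoes the plan.name attribute discussed above plus a location field; the full body schema for the successbricks.cleardb provider is not shown in this thread, so treat those fields and the token handling as placeholders:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CreateClearDbDatabase {
    public static void main(String[] args) throws Exception {
        String subscriptionId = "<subscriptionid>";
        String resourceGroup = "<resource-group>";
        String databaseName = "<my-database>";
        String accessToken = "<ARM bearer token>";   // the same Azure AD token used for other management calls

        String url = "https://management.azure.com/subscriptions/" + subscriptionId
                + "/resourceGroups/" + resourceGroup
                + "/providers/successbricks.cleardb/databases/" + databaseName
                + "?api-version=2014-04-01";

        // The "plan.name": "Free" attribute is the one that made the call succeed for me;
        // the location value is an assumption and may need to match your resource group's region.
        String body = "{ \"location\": \"West US\", \"plan.name\": \"Free\" }";

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}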
I am trying to fetch the contacts of users who have an account in the Google Apps Marketplace. While fetching the contacts I get the following error:
com.google.gdata.util.ParseException: The prefix "atom" for element "atom:cc" is not bound.
at com.google.gdata.util.XmlParser.parse(XmlParser.java:695)
at com.google.gdata.util.XmlParser.parse(XmlParser.java:568)
at com.google.gdata.data.BaseFeed.parseAtom(BaseFeed.java:793)
at com.google.gdata.wireformats.input.AtomDataParser.parse(AtomDataParser.java:68)
at com.google.gdata.wireformats.input.AtomDataParser.parse(AtomDataParser.java:39)
at com.google.gdata.wireformats.input.CharacterParser.parse(CharacterParser.java:)
at com.google.gdata.wireformats.input.XmlInputParser.parse(XmlInputParser.java:52) ...
I am using the Java client library to fetch the contacts. Can you please let me know whether there is an issue in the Java client library? This issue has been there for a long time and I badly need to find a solution. What should I do to make it work? Any help will be appreciated.
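For reference, the fetch is essentially the stock contacts feed request, along these lines (simplified; the credentials are placeholders and the real app authenticates through the marketplace, but the getFeed call where the parse fails is the same):

import java.net.URL;

import com.google.gdata.client.contacts.ContactsService;
import com.google.gdata.data.contacts.ContactEntry;
import com.google.gdata.data.contacts.ContactFeed;

public class FetchContacts {
    public static void main(String[] args) throws Exception {
        // Authentication simplified to ClientLogin for this sketch.
        ContactsService service = new ContactsService("example-app");
        service.setUserCredentials("user@example.com", "password");

        URL feedUrl = new URL("https://www.google.com/m8/feeds/contacts/default/full");
        // The ParseException above is thrown while this feed is being parsed.
        ContactFeed feed = service.getFeed(feedUrl, ContactFeed.class);

        for (ContactEntry entry : feed.getEntries()) {
            System.out.println(entry.getTitle().getPlainText());
        }
    }
}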
Thanks,
VijayRaj
I got the same problem that you have with the Java client, but with the .NET client.
After contacting Google support, they told me that the contact's arbitrary XML data, which is stored in a Property element, cannot be parsed with my version of GData.
However, there is a time-intensive workaround of deleting and recreating the contacts, but that's probably not what you are looking for; it wasn't for me either.
After switching to the Python implementation, everything works fine now.
Check out this issue report: Issue 361