Google Maps API: internal server error when inserting a feature - google-maps

I am trying to insert features on a custom Google map. I use the sample code from the docs,
but I get a ServiceException (Internal server error) when I call the service's insert method.
Here is what I do:
I create a map and get the resulting MapEntry object:
myMapEntry = (MapEntry) service.insert(mapUrl, myEntry);
This works fine: I can see the map I created in "My maps" on Google.
I use the feature feed URL from the map to insert a feature:
final URL featureEditUrl = myMapEntry.getFeatureFeedUrl();
I create a KML string using the sample from the docs:
String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">"
    + "<name>Aunt Joanas Ice Cream Shop</name>"
    + "<Point>"
    + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>"
    + "</Point></Placemark>";
And when I call the insert method I get an internal server error.
I must be doing something wrong, but I can't see what. Can anybody help?
Here is the complete code I use:
public void doCreateFeaturesFormap(MapEntry myMap)
        throws ServiceException, IOException {
    final URL featureEditUrl = myMap.getFeatureFeedUrl();
    FeatureEntry featureEntry = new FeatureEntry();
    try {
        String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">"
                + "<name>Aunt Joanas Ice Cream Shop</name>"
                + "<Point>"
                + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>"
                + "</Point></Placemark>";
        XmlBlob kml = new XmlBlob();
        kml.setFullText(kmlStr);
        featureEntry.setKml(kml);
        featureEntry.setTitle(new PlainTextConstruct("Feature Title"));
    } catch (NullPointerException e) {
        System.out.println("Error: " + e.getClass().getName());
    }
    FeatureEntry myFeature = (FeatureEntry) service.insert(featureEditUrl, featureEntry);
}
Thanks in advance,
Vincent.

For future reference, it was an error in their example.
Here's the issue:
http://code.google.com/p/gdata-java-client/issues/detail?id=285
Replace kml.setFullText(kmlStr) with kml.setBlob(kmlStr).

Related

Google Drive Java API returns deleted files

I have set up push notifications for when the Drive files I'm watching are edited.
Everything was fine until I tried to delete some folders from the Google Drive UI. They disappeared from the UI, but my service still receives them as if they were present.
try {
    configdata = dao.getConfigByChannelId(channelId, IntegrationType.DRIVE);
    System.out.println("ACCESS TOKEN FOR CHANNEL ID: " + configdata.getAccessToken());
    GoogleCredential credential = new GoogleCredential().setAccessToken(configdata.getAccessToken());
    Drive service = new Drive.Builder(httpTransport, jsonFactory, null)
            .setApplicationName("Akoonu")
            .setHttpRequestInitializer(credential).build();
    Files.List files = service.files().list();
    try {
        Change change = service.changes().get(String.valueOf((Integer.parseInt(changeId) - 1))).execute();
        System.out.println("Changed file ID: " + change.getFileId());
        System.out.println("Check delete case: " + change.getDeleted());
        if (change.getDeleted()) {
            System.out.println("File has been deleted");
            File changedFile = change.getFile();
            strpath.replace(changedFile.getTitle(), "");
            String path = strpath.replace(changedFile.getTitle(), "");
            //deleteItem = iao.getIventoryItemByFilePathAndConfigId(changedFile.getTitle(), path, configdata.getId(), configdata.getAccountId());
            deleteItem = iao.getIventoryItemByExternalId(changedFile.getId(), configdata.getId(), configdata.getAccountId());
            itemService.deleteInventoryItem(deleteItem.getId(), deleteItem.getAccountId());
            //deleteFilePathList.add(metadata.getPathDisplay().substring(1));
        } else {
            File changedFile = change.getFile();
            System.out.println("Changed file Title: " + changedFile.getTitle());
            ...
I have tried a lot of samples but it is still not fixed. Please help me. Thanks.
I was also facing this type of issue on my project, but after a long time I found a solution.
Use a query string parameter in your code, like this:
Files.List request = service.files().list().setQ("trashed=false");
That should fix the issue.
Thanks.
In v3 there's an "explicitlyTrashed" flag on files, and that was what solved it for me (for now):
https://developers.google.com/drive/api/v3/reference/files

Flickr API returning unavailable image Windows Phone

Hi, I'm new to Windows Phone and the Flickr APIs.
I've been trying to get some images and display them on the panorama view with this code:
var baseUrl = string.Format(flickString, flickrAPIKey);
string flickrResult = await client.GetStringAsync(baseUrl);
FlickrData flickrApiData = JsonConvert.DeserializeObject<FlickrData>(flickrResult);
if (flickrApiData.stat == "ok")
{
    foreach (Photo data in flickrApiData.photos.photo)
    {
        // To retrieve one photo
        // http://farm{farmid}.staticflickr.com/{server-id}/{id}_{secret}{size}.jpeg
        //string photoUrl = "http://farm{0}.staticflickr.com/{1}/{2}_{3}_o.jpeg";
        //string photoUrl = "http://farm{0}.staticflickr.com/{1}/{2}_{3}_b.jpeg";
        string photoUrl = "http://farm{0}.staticflickr.com/{0}/{0}_{0}_n.jpeg";
        string baseFlickrUrl = string.Format(photoUrl,
            data.farm,
            data.server,
            data.id,
            data.secret);
        flickr1Image.Source = new BitmapImage(new Uri(baseFlickrUrl));
        break;
    }
}
I've tried different farms & servers etc. but every time it still returns "This image is unavailable at this time". I don't know what I'm doing wrong here; I'd appreciate some help.
Thanks
After running your link, it turns out that the image extension should be jpg instead of jpeg.
But I would strongly recommend you use the extras field to get the respective URL directly from the API:
extras (Optional)
A comma-delimited list of extra information to fetch for each returned record.
You can use any of: url_sq, url_t, url_s, url_q, url_m, url_n, url_z, url_c, url_l, url_o
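For illustration, here is a minimal sketch of that approach, reusing the HttpClient, API key and Json.NET model classes from the question. The search URL and the UrlN property are assumptions (the Photo class would need a property mapped to the url_n field):
// Sketch only: ask the API for url_n via "extras" and use it directly,
// instead of assembling farm/server/id/secret by hand.
string searchUrl = "https://api.flickr.com/services/rest/?method=flickr.photos.search"
                 + "&api_key=" + flickrAPIKey
                 + "&tags=sunset"
                 + "&extras=url_n"
                 + "&format=json&nojsoncallback=1";
string flickrResult = await client.GetStringAsync(searchUrl);
FlickrData flickrApiData = JsonConvert.DeserializeObject<FlickrData>(flickrResult);
if (flickrApiData.stat == "ok")
{
    foreach (Photo data in flickrApiData.photos.photo)
    {
        // Assumes Photo has: [JsonProperty("url_n")] public string UrlN { get; set; }
        flickr1Image.Source = new BitmapImage(new Uri(data.UrlN));
        break;
    }
}
url_n corresponds to the small (320px) size; pick whichever url_* field matches the size you actually want to display.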

Writing a full website to socket with microcontroller

I'm using a web server to control devices in the house with a microcontroller running .NET MF (Netduino Plus 2). The code below writes a simple HTML page to a device that connects to the microcontroller over the internet.
while (true)
{
    Socket clientSocket = listenerSocket.Accept();
    bool dataReady = clientSocket.Poll(5000000, SelectMode.SelectRead);
    if (dataReady && clientSocket.Available > 0)
    {
        byte[] buffer = new byte[clientSocket.Available];
        int bytesRead = clientSocket.Receive(buffer);
        string request = new string(System.Text.Encoding.UTF8.GetChars(buffer));
        if (request.IndexOf("ON") >= 0)
        {
            outD7.Write(true);
        }
        else if (request.IndexOf("OFF") >= 0)
        {
            outD7.Write(false);
        }
        string statusText = "Light is " + (outD7.Read() ? "ON" : "OFF") + ".";
        string response = WebPage.startHTML(statusText, ip);
        clientSocket.Send(System.Text.Encoding.UTF8.GetBytes(response));
    }
    clientSocket.Close();
}
public static string startHTML(string ledStatus, string ip)
{
    string code = "<html><head><title>Netduino Home Automation</title></head><body> <div class=\"status\"><p>" + ledStatus + " </p></div> <div class=\"switch\"><p>On</p><p>Off</p></div></body></html>";
    return code;
}
This works great, so I wrote a full jQuery Mobile website to use instead of the simple HTML. The website is stored on the SD card of the device and, using the code below, should be served in place of the simple HTML above.
However, my problem is that the Netduino only writes the single HTML page to the browser, with none of the JS/CSS files that are referenced in the HTML. How can I make sure the browser reads all of these files, as a full website?
The code I wrote to read the website from the SD is:
private static string getWebsite()
{
    string text = "";
    try
    {
        using (StreamReader reader = new StreamReader(@"\SD\index.html"))
        {
            text = reader.ReadToEnd();
        }
    }
    catch (Exception e)
    {
        throw new Exception("Failed to read " + e.Message);
    }
    return text;
}
I replaced the string code = "..." bit with:
string code = getWebsite();
How can I make sure the browser reads all of these files, as a full website?
Isn't it already? Use an HTTP debugging tool like Fiddler. As I read your code, your listenerSocket is supposed to listen on port 80. Your browser will first retrieve the result of the getWebsite call and parse the HTML.
Then it'll fire more requests as it finds CSS and JS references in your HTML (none shown). These requests will, as far as we can see from your code, again receive the result of the getWebsite call.
You'll need to parse the incoming HTTP request to see which resource is being requested. It'll become a lot easier if the .NET implementation you run supports the HttpListener class (and it seems to).
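To make that concrete, here is a rough sketch of that request parsing on .NET MF; the file names, MIME mapping and helper name are illustrative, not taken from the poster's project:
// Sketch: look at the request line ("GET /app.css HTTP/1.1"), map the path to a
// file on the SD card, and send it back with a matching Content-Type header.
private static void ServeFile(Socket clientSocket, string request)
{
    string[] tokens = request.Split(' ');
    string path = tokens.Length > 1 ? tokens[1] : "/";
    if (path == "/") path = "/index.html";

    string contentType = "text/html";
    if (path.IndexOf(".css") >= 0) contentType = "text/css";
    else if (path.IndexOf(".js") >= 0) contentType = "application/javascript";

    try
    {
        using (FileStream fs = new FileStream(@"\SD" + path.Replace('/', '\\'),
                                              FileMode.Open, FileAccess.Read))
        {
            string header = "HTTP/1.1 200 OK\r\nContent-Type: " + contentType +
                            "\r\nContent-Length: " + fs.Length + "\r\n\r\n";
            clientSocket.Send(System.Text.Encoding.UTF8.GetBytes(header));

            byte[] chunk = new byte[1024];
            int read;
            while ((read = fs.Read(chunk, 0, chunk.Length)) > 0)
            {
                clientSocket.Send(chunk, read, SocketFlags.None);
            }
        }
    }
    catch (Exception)
    {
        clientSocket.Send(System.Text.Encoding.UTF8.GetBytes("HTTP/1.1 404 Not Found\r\n\r\n"));
    }
}
The existing while loop would then call something like ServeFile(clientSocket, request) instead of always sending startHTML, so each CSS/JS request the browser fires gets the matching file back.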

Flex HTTPService authorization to obtain XML feed

I'm trying to obtain an XML feed from my cPanel through the API. I have tried several methods (see below) to pass the authorization needed to obtain the XML feed.
In my browser I can get the feed the following way:
http://user:pass@domain.com:2086/xml-api/listaccts?
The feed example from the server:
<listaccts>
<acct>
<disklimit>2500M</disklimit>
<diskused>56M</diskused>
<domain>domain.com</domain>
<email>dot@domain.com</email>
<ip>xx.xx.xx.xx</ip>
<max_defer_fail_percentage>unlimited</max_defer_fail_percentage>
<max_email_per_hour>unlimited</max_email_per_hour>
<maxaddons>*unknown*</maxaddons>
<maxftp>5</maxftp>
<maxlst>*unknown*</maxlst>
<maxparked>*unknown*</maxparked>
<maxpop>25</maxpop>
<maxsql>1</maxsql>
<maxsub>5</maxsub>
<min_defer_fail_to_trigger_protection>5</min_defer_fail_to_trigger_protection>
<owner>root</owner>
<partition>home</partition>
<plan>Basic</plan>
<shell>/usr/local/cpanel/bin/noshell</shell>
<startdate>13 Feb 17 07:05</startdate>
<suspended>0</suspended>
<suspendreason>not suspended</suspendreason>
<suspendtime/>
<theme>x3</theme>
<unix_startdate>1361109935</unix_startdate>
<user>xxxxxxxx</user>
</acct>
</listaccts>
My Application script:
<s:HTTPService id="clientList" method="GET" resultFormat="e4x"/>
In Scripts:
[Bindable]
private var clientInfo:Object = new Object();

private function clients(event:Event):void {
    clientList.url = 'http://' + loginUsername.text;
    clientList.url += ':' + loginPassword.text;
    clientList.url += '@' + loginServer.text;
    clientList.url += ':2086/xml-api/listaccts?';
    clientList.addEventListener("result", clientResult);
    clientList.addEventListener("fault", clientFault);
    clientList.send();
    CursorManager.setBusyCursor();
}

public function clientResult(event:ResultEvent):void {
    clientInfo = clientList.lastResult.acct;
    CursorManager.removeBusyCursor();
}

public function clientFault(event:FaultEvent):void {
    var faultstring:String = event.fault.faultString;
    Alert.show("Unable to obtain client list", "Error");
    CursorManager.removeBusyCursor();
}
What am I doing wrong? I keep getting the error (Unable to obtain client list). I think the problem could be the authentication method I'm using.
Don't concatenate a string to add your username and password.
Call this as a WebService and use setCredentials.

Calling wkhtmltopdf to generate PDF from HTML

I'm attempting to create a PDF file from an HTML file. After looking around a little I've found wkhtmltopdf to be perfect. I need to call this .exe from the ASP.NET server. I've attempted:
Process p = new Process();
p.StartInfo.UseShellExecute = false;
p.StartInfo.FileName = HttpContext.Current.Server.MapPath("wkhtmltopdf.exe");
p.StartInfo.Arguments = "TestPDF.htm TestPDF.pdf";
p.Start();
p.WaitForExit();
With no success: no files are created on the server. Can anyone give me a pointer in the right direction? I put the wkhtmltopdf.exe file in the top-level directory of the site. Is there anywhere else it should be held?
Edit: If anyone has a better solution for dynamically creating PDF files from HTML, please let me know.
Update:
My answer below creates the PDF file on disk. I then streamed that file to the user's browser as a download. Consider using something like Hath's answer below to get wkhtmltopdf to output to a stream instead and then send that directly to the user - that will bypass lots of issues with file permissions etc.
My original answer:
Make sure you've specified an output path for the PDF that is writeable by the ASP.NET process of IIS running on your server (usually NETWORK_SERVICE I think).
Mine looks like this (and it works):
/// <summary>
/// Convert Html page at a given URL to a PDF file using open-source tool wkhtml2pdf
/// </summary>
/// <param name="Url"></param>
/// <param name="outputFilename"></param>
/// <returns></returns>
public static bool HtmlToPdf(string Url, string outputFilename)
{
// assemble destination PDF file name
string filename = ConfigurationManager.AppSettings["ExportFilePath"] + "\\" + outputFilename + ".pdf";
// get proj no for header
Project project = new Project(int.Parse(outputFilename));
var p = new System.Diagnostics.Process();
p.StartInfo.FileName = ConfigurationManager.AppSettings["HtmlToPdfExePath"];
string switches = "--print-media-type ";
switches += "--margin-top 4mm --margin-bottom 4mm --margin-right 0mm --margin-left 0mm ";
switches += "--page-size A4 ";
switches += "--no-background ";
switches += "--redirect-delay 100";
p.StartInfo.Arguments = switches + " " + Url + " " + filename;
p.StartInfo.UseShellExecute = false; // needs to be false in order to redirect output
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.StartInfo.RedirectStandardInput = true; // redirect all 3, as it should be all 3 or none
p.StartInfo.WorkingDirectory = StripFilenameFromFullPath(p.StartInfo.FileName);
p.Start();
// read the output here...
string output = p.StandardOutput.ReadToEnd();
// ...then wait n milliseconds for exit (as after exit, it can't read the output)
p.WaitForExit(60000);
// read the exit code, close process
int returnCode = p.ExitCode;
p.Close();
// if 0 or 2, it worked (not sure about other values, I want a better way to confirm this)
return (returnCode == 0 || returnCode == 2);
}
I had the same problem when I tried using MSMQ with a Windows service, but it was very slow for some reason (the Process part).
This is what finally worked:
private void DoDownload()
{
var url = Request.Url.GetLeftPart(UriPartial.Authority) + "/CPCDownload.aspx?IsPDF=False&UserID=" + this.CurrentUser.UserID.ToString();
var file = WKHtmlToPdf(url);
if (file != null)
{
Response.ContentType = "Application/pdf";
Response.BinaryWrite(file);
Response.End();
}
}
public byte[] WKHtmlToPdf(string url)
{
var fileName = " - ";
var wkhtmlDir = "C:\\Program Files\\wkhtmltopdf\\";
var wkhtml = "C:\\Program Files\\wkhtmltopdf\\wkhtmltopdf.exe";
var p = new Process();
p.StartInfo.CreateNoWindow = true;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.StartInfo.RedirectStandardInput = true;
p.StartInfo.UseShellExecute = false;
p.StartInfo.FileName = wkhtml;
p.StartInfo.WorkingDirectory = wkhtmlDir;
string switches = "";
switches += "--print-media-type ";
switches += "--margin-top 10mm --margin-bottom 10mm --margin-right 10mm --margin-left 10mm ";
switches += "--page-size Letter ";
p.StartInfo.Arguments = switches + " " + url + " " + fileName;
p.Start();
//read output
byte[] buffer = new byte[32768];
byte[] file;
using(var ms = new MemoryStream())
{
while(true)
{
int read = p.StandardOutput.BaseStream.Read(buffer, 0,buffer.Length);
if(read <=0)
{
break;
}
ms.Write(buffer, 0, read);
}
file = ms.ToArray();
}
// wait or exit
p.WaitForExit(60000);
// read the exit code, close process
int returnCode = p.ExitCode;
p.Close();
return returnCode == 0 ? file : null;
}
Thanks Graham Ambrose and everyone else.
OK, so this is an old question, but an excellent one. And since I did not find a good answer, I made my own :) Also, I've posted this super simple project to GitHub.
Here is some sample code:
var pdfData = HtmlToXConverter.ConvertToPdf("<h1>SOO COOL!</h1>");
Here are some key points:
- No P/Invoke
- No creating of a new process
- No file system (all in RAM)
- Native .NET DLL with IntelliSense, etc.
- Ability to generate a PDF or PNG (HtmlToXConverter.ConvertToPng)
Check out the C# wrapper library (using P/Invoke) for the wkhtmltopdf library: https://github.com/pruiz/WkHtmlToXSharp
There are many reasons why this is generally a bad idea. How are you going to control the executables that get spawned off but end up living on in memory if there is a crash? What about denial-of-service attacks, or if something malicious gets into TestPDF.htm?
My understanding is that the ASP.NET user account will not have the rights to log on locally. It also needs to have the correct file permissions to access the executable and to write to the file system. You need to edit the local security policy and let the ASP.NET user account (maybe ASPNET) log on locally (it may be in the deny list by default). Then you need to edit the permissions on the NTFS filesystem for the other files. If you are in a shared hosting environment it may be impossible to apply the configuration you need.
The best way to use an external executable like this is to queue jobs from the ASP.NET code and have some sort of service monitor the queue. If you do this you will protect yourself from all sorts of bad things happening. The maintenance issues with changing the user account are not worth the effort in my opinion, and whilst setting up a service or scheduled job is a pain, it's just a better design. The ASP.NET page should poll a result queue for the output and you can present the user with a wait page. This is acceptable in most cases.
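As a rough illustration of that queued design (the queue path, "url|outputFile" message format and helper class are made up for the example), the page can drop a job onto an MSMQ queue and a Windows service can run wkhtmltopdf outside the IIS worker process:
using System.Diagnostics;
using System.Messaging;

// Sketch only: illustrative queue path and message format.
public static class PdfJobQueue
{
    private const string QueuePath = @".\Private$\PdfJobs"; // queue must be created on the server

    // Called from the ASP.NET page: enqueue the job and return immediately.
    public static void Enqueue(string url, string outputFile)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(url + "|" + outputFile, "html-to-pdf job");
        }
    }

    // Called from a Windows service loop: take one job and run wkhtmltopdf
    // under the service account, not the ASP.NET account.
    public static void ProcessNext()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            string body = (string)queue.Receive().Body;   // blocks until a job arrives
            string[] job = body.Split('|');
            using (var p = Process.Start("wkhtmltopdf.exe", job[0] + " " + job[1]))
            {
                p.WaitForExit();
            }
        }
    }
}
The ASP.NET page then just polls for the finished file (or a result queue) and shows a wait page in the meantime, as described above.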
You can tell wkhtmltopdf to send its output to stdout by specifying "-" as the output file.
You can then read the output from the process into the response stream and avoid the permission issues with writing to the file system.
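A minimal sketch of that, assuming an ASP.NET page and .NET 4+ (the exe path and URL below are placeholders):
// Sketch: "-" as the output file makes wkhtmltopdf write the PDF to stdout,
// which is then copied straight into the HTTP response (no file on disk).
var p = new System.Diagnostics.Process();
p.StartInfo.FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe";
p.StartInfo.Arguments = "--page-size A4 http://example.com/TestPDF.htm -";
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardOutput = true;
p.Start();

Response.ContentType = "application/pdf";
p.StandardOutput.BaseStream.CopyTo(Response.OutputStream); // CopyTo needs .NET 4+
p.WaitForExit();
Response.End();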
My take on this with 2018 stuff:
I am using async. I am streaming to and from wkhtmltopdf. I created a new StreamWriter because wkhtmltopdf expects UTF-8 by default, but the process's standard input is set to something else when the process starts.
I didn't include a lot of arguments since those vary from user to user. You can add what you need using additionalArgs.
I removed p.WaitForExit(...) since I wasn't handling the case where it fails, and it would hang anyway on await tStandardOutput. If a timeout is needed, you would have to call Wait(...) on the different tasks with a cancellation token or timeout and handle that accordingly.
public async Task<byte[]> GeneratePdf(string html, string additionalArgs)
{
    ProcessStartInfo psi = new ProcessStartInfo
    {
        FileName = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe",
        UseShellExecute = false,
        CreateNoWindow = true,
        RedirectStandardInput = true,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        Arguments = "-q -n " + additionalArgs + " - -"
    };
    using (var p = Process.Start(psi))
    using (var pdfStream = new MemoryStream())
    using (var utf8Writer = new StreamWriter(p.StandardInput.BaseStream, Encoding.UTF8))
    {
        await utf8Writer.WriteAsync(html);
        utf8Writer.Close();
        var tStandardOutput = p.StandardOutput.BaseStream.CopyToAsync(pdfStream);
        var tStandardError = p.StandardError.ReadToEndAsync();
        await tStandardOutput;
        string errors = await tStandardError;
        if (!string.IsNullOrEmpty(errors)) { /* deal/log with errors */ }
        return pdfStream.ToArray();
    }
}
Things I haven't included in there but could be useful if you have images, CSS or other assets that wkhtmltopdf will have to load when rendering the HTML page:
- you can pass the authentication cookie using --cookie
- in the head of the HTML page, you can set the base tag with href pointing to the server, and wkhtmltopdf will use that if need be
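For example (the cookie name, the authCookieValue variable and the server URL below are illustrative), both tips plug into the GeneratePdf call above like this:
// Sketch: forward the forms-auth cookie so wkhtmltopdf can fetch protected assets,
// and rely on a <base> tag in the HTML so relative CSS/JS/image URLs resolve.
string additionalArgs = "--cookie .ASPXAUTH " + authCookieValue; // authCookieValue: your auth cookie's value
// In the <head> of the html string passed in:  <base href="https://myserver.example/" />
byte[] pdf = await GeneratePdf(html, additionalArgs);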
Thanks for the question / answer / all the comments above. I came upon this when I was writing my own C# wrapper for WKHTMLtoPDF and it answered a couple of the problems I had. I ended up writing about this in a blog post - which also contains my wrapper (you'll no doubt see the "inspiration" from the entries above seeping into my code...)
Making PDFs from HTML in C# using WKHTMLtoPDF
Thanks again guys!
The ASP.NET process probably doesn't have write access to the directory.
Try telling it to write to %TEMP%, and see if it works.
Also, make your ASP.NET page echo the process's stdout and stderr, and check for error messages.
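A sketch of both suggestions combined, based on the code from the question (the file names are illustrative):
// Sketch: write the PDF into %TEMP% (writable by the worker process) and echo
// wkhtmltopdf's stdout/stderr into the page so any error message is visible.
string outputPath = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "TestPDF.pdf");

Process p = new Process();
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.StartInfo.FileName = HttpContext.Current.Server.MapPath("wkhtmltopdf.exe");
p.StartInfo.Arguments = "TestPDF.htm \"" + outputPath + "\"";
p.Start();

// Fine for wkhtmltopdf's small amount of console output; read asynchronously if it grows.
Response.Write("<pre>" + HttpUtility.HtmlEncode(p.StandardOutput.ReadToEnd()) + "</pre>");
Response.Write("<pre>" + HttpUtility.HtmlEncode(p.StandardError.ReadToEnd()) + "</pre>");
p.WaitForExit();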
Generally, the return code is 0 if the PDF file is created properly and correctly. If it is not created, the value is in the negative range.
using System;
using System.Diagnostics;
using System.Web;

public partial class pdftest : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    private void fn_test()
    {
        try
        {
            string url = HttpContext.Current.Request.Url.AbsoluteUri;
            Response.Write(url);
            ProcessStartInfo startInfo = new ProcessStartInfo();
            startInfo.FileName =
                @"C:\PROGRA~1\WKHTML~1\wkhtmltopdf.exe"; //"wkhtmltopdf.exe";
            startInfo.Arguments = url + @" C:\test"
                + Guid.NewGuid().ToString() + ".pdf";
            Process.Start(startInfo);
        }
        catch (Exception ex)
        {
            string xx = ex.Message.ToString();
            Response.Write("<br>" + xx);
        }
    }

    protected void btn_test_Click(object sender, EventArgs e)
    {
        fn_test();
    }
}