I am using a PowerBuilder OLE object to encode/decode a JSON-like string, but I don't know how to pass my data in so that it gets encoded.
Here is my test data:
[{"ref":"T213445677","pickdtime":"2018-02-02 09:00:00","compname":"Wing Kei Shoes Company"}]
Here is my code:
OleObject wsh
Integer li_rc
String ls_json, ls_temp

// Single-quote the literal so the embedded double quotes need no escaping
ls_json = '[{"ref":"T213445677","pickdtime":"2018-02-02 09:00:00","compname":"Wing Kei Shoes Company"}]'

wsh = CREATE OleObject
li_rc = wsh.ConnectToNewObject( "MSScriptControl.ScriptControl" )
wsh.language = "javascript"

ls_temp = wsh.Eval("escape('" + ls_json + "')")
MessageBox( "ESCAPE", ls_temp )

ls_temp = wsh.Eval("unescape('" + ls_temp + "')")
MessageBox( "UNESCAPE", ls_temp )
You might want to look at this article regarding a JSON parser written in Visual Basic for Applications (VBA). It can be found here: http://ashuvba.blogspot.com/2014/09/json-parser-in-vba-browsing-through-net.html
The current version of PowerBuilder (2017 R2) has native JSON parsing built into the datawindow.
I have an example app that shows how to post messages to Twitter from PowerBuilder. It includes functions for encoding. The zip file includes PB 8 and PB 10 versions.
Related
I'm trying to connect to a REST web service using Delphi 6. I've been able to successfully do this with the code below:
function WSConnect(url: WideString; params: TStringList = nil; method: WideString = 'GET'): WideString;
var
  request: XMLHTTP60;
begin
  // CreateQueryString below just adds any params passed in to the end of the url
  url := url + CreateQueryString(params);
  CoInitialize(nil);
  request := CoXMLHTTP60.Create;
  request.open(method, url, false, '', '');
  request.send('');
  if request.status = 200 then
  begin
    Result := request.responseText;
  end
  else
    Result := 'Error: (' + IntToStr(request.status) + ') ' + request.statusText;
  CoUninitialize();
end;
I've pieced together this code from different examples around the net, but I can't find any documentation on CoXMLHTTP60. The code works great, but the current implementation only gives me back a string containing the JSON response, and I'm struggling to figure out how to pull the properties out of that JSON string in Delphi 6. I've looked at SuperObjects (which people say has problems with Delphi 6, so I didn't try it) and a different project called json4delphi that I couldn't get to compile in Delphi 6, despite its claim to be compatible.
There are also different properties on the CoXMLHTTP60 response object that I believe might get me the values I need. They are:
request.responseBody: OleVariant;
request.responseStream: OleVariant;
request.responseXML: IDispatch;
I would think one of these would provide access to the properties in the JSON, but I can't figure out how to even use them. I'm pretty unfamiliar with IDispatch; once again, documentation is difficult to find, and what is there looks very confusing. Some places say to just assign the IDispatch property to an OleVariant, after which I can access the properties directly with dot notation, but that doesn't seem to be working for me either, so maybe I'm doing it wrong.
Does anyone have any knowledge on how to pull out the response properties in a CoXMLHTTP60 call?
I'm trying to build a sample application in Java for the Japanese language that reads an image file and simply outputs the text extracted from the image. I found a sample application on the net which works perfectly for English, but for Japanese it gives unidentified text. The following is my code:
BytePointer outText;
TessBaseAPI api = new TessBaseAPI();
// Initialize tesseract-ocr with Japanese, without specifying a tessdata path
if (api.Init(".", "jpn") != 0) {
    System.err.println("Could not initialize tesseract.");
    System.exit(1);
}
// Open the input image with the Leptonica library
PIX image = pixRead("test.png");
api.SetImage(image);
// Get the OCR result
outText = api.GetUTF8Text();
String string = outText.getString();
assertTrue(!string.isEmpty());
System.out.println("OCR output:\n" + string);
// Destroy the used objects and release memory
api.End();
outText.deallocate();
pixDestroy(image);
my output is:
OCR output:
ETCカー-ード申 込書
�申�込�日 09/02/2017
ETC FeatureID ETCFFL
ー申込枚輩交 画 枚
I have used jpn.tessdata and my application is reading the tessdata file as well. Is any more configuration needed? I'm using Tesseract 3.02 with a very clean image.
Yes! I got the solution. What we need to do is set the locale in our Java code as follows:
olocale = new Locale.Builder().setLanguage("ja").setRegion("JP").build();
We can set the locale for English as well, in order to extract both Japanese and English text from the image.
Now it is working like a charm for me!
I am struggling to make this work and have trawled for examples on how to fix it, to no avail. I am converting a MySQL result set into an XML file to upload to eBay Motors Pro. I thought this would be relatively simple, yet I am struggling with the conventions set out by both eBay and the .NET framework.
The opening element of the file has to read:
<empro xmlns="urn:de:mobile:emp:inventory:xml:uk:car" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:de:mobile:emp:inventory:xml:uk:car http://www.ebaymotorspro.co.uk/schema/empro-car-uk.xsd">
I am using the XmlWriter class to recreate this and have the following so far:
Using writer As XmlWriter = XmlWriter.Create(feedfile, xmlsettings)
    writer.WriteStartDocument(True)
    writer.WriteStartElement("empro", "urn:de:mobile:emp:inventory:xml:uk:car")
    ' This bit is causing the issue
    writer.WriteAttributeString("xmlns", "xsi", "http://www.w3.org/2001/XMLSchema-instance")
End Using
I end up with the following code in the xml file:
<empro p1:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p1="xmlns" xmlns="urn:de:mobile:emp:inventory:xml:uk:car" />
This isn't correct. Can anyone please point me in the right direction to make this output the correct document header?
Many thanks
Graham
The part that is causing you trouble should have been written using the WriteAttributeString() overload that accepts four parameters:
' This bit is causing the issue
writer.WriteAttributeString("xmlns", "xsi", Nothing, "http://www.w3.org/2001/XMLSchema-instance")
Anyway, I would personally do this using LINQ to XML. VB even has an exclusive feature for constructing XML that C# doesn't: XML literals with embedded-expression support. For example:
Dim contactName As String = "Patrick Hines"
Dim contact As XElement =
<contact><%= contactName %></contact>
Console.WriteLine(contact.ToString())
'output :'
'<contact>Patrick Hines</contact>'
Source: MSDN: Creating XML in Visual Basic
I am creating a program that takes an email and converts it into JSON. In the interest of learning functional languages, I thought this would be a good opportunity to learn F#. So first off I want a message class that can then easily be serialized into JSON. This class will have some members that can be null, and because of these optional parameters I am using the following format:
type Email(from:string, recipient:string, ?cc:string .........) =
member x.From = from
member x.Recipient = recipient.....
This class actually has a lot of members, as I want to store every piece of information that an email could have. What happens is that if I try to format the constructor myself by splitting it onto multiple lines, I get warnings. How do people make such a gigantic constructor look good in F#? Is there a better way to do this?
On a similar note, I'm not finding Visual Studio very helpful for F#. What are other people using to write their code? I think there is a mode for Emacs, for example...
There should not be any warnings if all "newlined" parameters start at the same column. For example:
type Email(from:string, recipient:string,
           ?cc:string) =
    member x.From = from
    member x.Recipient = recipient
(I personally enjoy using Visual Studio for F#. I doubt there is an IDE with more complete support.)
As an alternative to creating a constructor that takes all properties as parameters, you can also create an object with mutable properties and then specify only those that you want to set in the constructor call:
type Email(from:string, recipient:string) =
    member val From = from with get, set
    member val Recipient = recipient with get, set
    member val Cc = "" with get, set
    member val Bcc = "" with get, set

let eml = Email("me#me.com", "you#you.com", Cc="she#she.net")
This is using a new member val feature of F# 3.0. Writing the same in F# 2.0 would be pretty annoying, because you'd have to define getter and setter by hand.
I am trying to scrape an HTML table and save its data in a database. What strategies/solutions have you found helpful in approaching this problem?
I'm most comfortable with Java and PHP but really a solution in any language would be helpful.
EDIT: For more detail, the UTA (Salt Lake's Bus system) provides bus schedules on its website. Each schedule appears in a table that has stations in the header and times of departure in the rows. I would like to go through the schedules and save the information in the table in a form that I can then query.
Here's the starting point for the schedules
It all depends on how well-formed the HTML you want to scrape is. If it's valid XHTML, you can simply run XPath queries against it to get whatever you want.
Example of XPath in PHP: http://blogoscoped.com/archive/2004_06_23_index.html#108802750834787821
A helper class to scrape a table into an array: http://www.tgreer.com/class_http_php.html
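The linked examples are PHP, but the same XPath approach works from Python as well. Here is a minimal sketch assuming the lxml package; the table markup and the XPath expressions are just placeholders:

# Minimal XPath scrape with lxml (hypothetical table layout).
from lxml import html

page = """
<table id="schedule">
  <tr><th>Station</th><th>Departure</th></tr>
  <tr><td>Central</td><td>09:00</td></tr>
  <tr><td>Library</td><td>09:07</td></tr>
</table>
"""

doc = html.fromstring(page)
# Select every data row (rows that contain <td> cells), then read each cell's text.
for row in doc.xpath('//table[@id="schedule"]//tr[td]'):
    print([cell.text_content().strip() for cell in row.xpath('./td')])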
There is a nice book about this topic: Spidering Hacks by Kevin Hemenway and Tara Calishain.
I've found that scripting languages are generally better suited for doing such tasks. I personally prefer Python, but PHP will work as well. Chopping, mincing and parsing strings in Java is just too much work.
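For example, here is a rough Python sketch of the whole job the question describes: parsing a table and saving its rows to a database. The table markup, column names, and database schema are all assumptions, and it uses the newer bs4 package rather than the old BeautifulSoup module shown in the answer further down.

# Rough sketch: parse a hypothetical schedule table and store its rows in SQLite.
import sqlite3
from bs4 import BeautifulSoup

page = """
<table class="schedule">
  <tr><th>Station</th><th>Departure</th></tr>
  <tr><td>Central</td><td>09:00</td></tr>
  <tr><td>Library</td><td>09:07</td></tr>
</table>
"""

soup = BeautifulSoup(page, "html.parser")
rows = []
for tr in soup.find("table", class_="schedule").find_all("tr"):
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if cells:                      # the header row has only <th> cells, so skip it
        rows.append(cells)

conn = sqlite3.connect("schedules.db")
conn.execute("CREATE TABLE IF NOT EXISTS departures (station TEXT, departs TEXT)")
conn.executemany("INSERT INTO departures (station, departs) VALUES (?, ?)", rows)
conn.commit()
conn.close()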
I have tried screen-scraping before, but I found it to be very brittle, especially with dynamically-generated code.
I found a third-party DOM-parser and used it to navigate the source code with Regex-like matching patterns in order to find the data I needed.
I suggest trying to find out whether the owners of the site have a published API (often web services) for retrieving data from their system. If not, then good luck to you.
If what you want is the table in CSV form, then you can do it with Python. For example, imagine you want to scrape forex quotes in CSV form from a site like fxoanda; then:
# Python 2 / BeautifulSoup 3 example
from BeautifulSoup import BeautifulSoup
import urllib
from string import replace

# Build the query string: date range, output format and currency pair
date_s = '&date1=01/01/08'
date_f = '&date=11/10/08'
fx_url = 'http://www.oanda.com/convert/fxhistory?date_fmt=us'
fx_url_end = '&lang=en&margin_fixed=0&format=CSV&redirected=1'
cur1, cur2 = 'USD', 'AUD'
fx_url = fx_url + date_f + date_s + '&exch=' + cur1 + '&exch2=' + cur1
fx_url = fx_url + '&expr=' + cur2 + '&expr2=' + cur2 + fx_url_end

# Fetch the page and pull the CSV text out of its single <pre> block
data = urllib.urlopen(fx_url).read()
soup = BeautifulSoup(data)
data = str(soup.findAll('pre', limit=1))
data = replace(data, '[<pre>', '')
data = replace(data, '</pre>]', '')

# Write the result straight to a .csv file
file_location = '/Users/location_edit_this'
file_name = file_location + 'usd_aus.csv'
f = open(file_name, "w")
f.write(data)
f.close()
Once you have it in this form, you can convert the data to any format you like.
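For instance, once that file exists, the standard csv module will read it back into rows you can process or load elsewhere; the path below simply mirrors the placeholder used in the script above.

# Read the CSV written by the script above back into a list of rows.
import csv

file_location = '/Users/location_edit_this'
with open(file_location + 'usd_aus.csv') as f:
    rows = [row for row in csv.reader(f)]

print(rows[:5])   # first few rows; the exact columns depend on what the site returned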
At the risk of starting a shitstorm here on SO, I'd suggest that if the format of the table never changes, you could just about get away with using regular expressions to parse and capture the content you need.
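If the markup really is that stable, a throwaway pass in Python can be enough. This sketch assumes plain <tr>/<td> rows with no nested tables:

# Quick-and-dirty regex scrape: only sensible when the table markup is stable
# and contains no nested tables.
import re

page = "<tr><td>Central</td><td>09:00</td></tr><tr><td>Library</td><td>09:07</td></tr>"

rows = [re.findall(r"<td[^>]*>(.*?)</td>", tr, re.S)
        for tr in re.findall(r"<tr[^>]*>(.*?)</tr>", page, re.S)]
print(rows)   # [['Central', '09:00'], ['Library', '09:07']]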
pianohacker overlooked the HTML::TableExtract module, which was designed for exactly this sort of thing. You'd still need LWP to retrieve the table.
This would be by far the easiest with Perl, and the following CPAN modules:
http://metacpan.org/pod/HTML::Parser
http://metacpan.org/pod/LWP
http://metacpan.org/pod/DBD/mysql
http://metacpan.org/pod/DBI.pm
CPAN being the main distribution mechanism for Perl modules, and accessible by running the following shell command, for example:
# cpan HTML::Parser
If you're on Windows, things will be more interesting, but you can still do it: http://www.perlmonks.org/?node_id=583586