Does App Engine handle character encoding differently online vs. locally? - mysql

I've written a small web app which pulls data from a simple MySQL DB and displays it via Python and Flask. The text fields in the database use the utf8_general_ci collation, and some contain special characters - for example, 'Zürich'.
As Flask/Jinja2 like to work with Unicode, the strings are converted to Unicode after the query before being passed into the template. What's weird is that I need a different conversion depending on whether the code is running locally (Mac laptop) or deployed on GAE.
This works when running locally:
return [[unicode(c, encoding='utf-8') if isinstance(c, basestring) else c for c in b] for b in l]
This works when deployed on GAE:
return [[unicode(c, encoding='latin-1') if isinstance(c, basestring) else c for c in b] for b in l]
If I run the GAE version locally, 'Zürich' is displayed as 'Zürich'. Vice versa (running the local version on GAE), I get a UnicodeDecodeError.
As far as I can tell, the two databases are identical - the online version is a straight dump of the local version.

ü is the Mojibake for ü.
See "Mojibake" in here for a discussion of the likely causes.
See here for Python notes on what to do in the source code. I don't know about Jinja specifics.
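A short demonstration of where the garbled text comes from, assuming the column really stores UTF-8 bytes (windows-1252 stands in for "latin-1" below, since that is how browsers and Windows typically render it):

# Sketch: UTF-8 bytes decoded with the wrong codec produce the Mojibake above.
raw = u'Zürich'.encode('utf-8')           # b'Z\xc3\xbcrich' -- what a utf8 column stores
print(raw.decode('utf-8'))                # Zürich   (correct)
print(raw.decode('windows-1252'))         # ZÃ¼rich  (single Mojibake)
# Re-encoding that Mojibake as UTF-8 and misreading it once more gives the
# doubly-encoded form from the question:
print(raw.decode('windows-1252').encode('utf-8').decode('windows-1252'))   # ZÃƒÂ¼rich

If the driver is MySQLdb, passing charset='utf8' and use_unicode=True to connect() usually makes it hand back Unicode objects directly, removing the need for the manual decode; whether the GAE connection is configured that way is likely the real difference between the two environments.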

Related

"We found extra characters at the end of json" Excel Power Query

I want to use Excel Power Query to import JSON data from the web and transform the data afterwards. I have two versions of the JSON data, a development version and a live version. Both versions had no problems prior to last week and could connect without any issues.
Last week the live version stopped working and couldn't connect through Excel or Power BI anymore. Curiously, nothing has been changed in either version that could have contributed to this error; nobody touched either version during that time frame.
This is the dev connection string in Power Query that works without problems:
Source = Json.Document(Web.Contents("https://www.dev.com/dummy/YXC/JSONDATA")),
Now when I try to change it to this
Source = Json.Document(Web.Contents("https://www.live.com/dummy/YXC/JSONDATA")),
I get the error message: "[DataFormatError] We found extra characters at the end of JSON input."
After this and some troubleshooting, I opened a new, empty Excel file and put the link directly into the "From Web" tab. The connection fails and I get two errors:
"[DataFormatError] We found extra characters at the end of JSON input."
"Details: The resource at "https://www.live.com/dummy/YXC/JSONDATA" can't be opened through Web.Page since it apparently isn't a website."
Before the website can be opened, one must have an authorized Microsoft organisation account.
In Chrome the data shows up without any issues and there are no authorization problems; when I download the JSON data and import it directly from a file, there are no issues either. The same error occurs with other accounts.
Since it says DataFormatError, I started looking into the JSON data. The JSON data doesn't contain any whitespace and is identical to the working dev version.
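For reference, here is a rough Python sketch (requests library assumed; authentication omitted) of how one could check, outside of Power Query, whether the live endpoint really returns anything after the JSON payload, such as a BOM or trailing characters:

import json
import requests

resp = requests.get("https://www.live.com/dummy/YXC/JSONDATA")
raw = resp.content

print(resp.headers.get("Content-Type"))   # should be application/json
print(raw[:4])                            # a UTF-8 BOM would show up as b'\xef\xbb\xbf'

text = raw.decode("utf-8-sig")            # strips a BOM if present
obj, end = json.JSONDecoder().raw_decode(text)
print("trailing characters:", repr(text[end:]))   # anything non-empty here is the "extra characters"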
Has anybody encountered this, or does anyone have a clue how to fix it?

Invalid Character in ReportServer

When I try to execute a few reports in Report Manager, it throws this error:
The attempt to connect to the report server failed. Check your connection information and that the report server is a compatible version.
There is an error in XML document (1, 134206).
'', hexadecimal value 0x0C, is an invalid character. Line 1, position 134206.
When I execute from Report Server, it executes successfully.
The same .rdl file works perfectly on another system using Report Manager.
What could the issue be? How can we reproduce this error on the working system, and how can we solve it?
The character producing the error is 0x0C, which is the form feed (FF) character, escaped as \f and sometimes used as a page or section break.
As a first troubleshooting step you can remove this character and see if the report works.
You can find it in your .rdl if you open it with Notepad++, for instance, and search for \f (in extended search mode). You can then remove the character and rerun.
The second step is to determine why it works on one system and not on another, which could well come down to a difference in the SSRS and/or OS version of the systems in question.
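If you would rather do this programmatically than in Notepad++, here is a minimal Python sketch (file names are hypothetical) that reports and strips 0x0C and any other control characters that XML 1.0 does not allow:

import re

# Control characters that are illegal in XML 1.0 (tab, LF and CR are allowed).
ILLEGAL = re.compile(u'[\x00-\x08\x0b\x0c\x0e-\x1f]')

with open('report.rdl', 'rb') as f:
    text = f.read().decode('utf-8')

for m in ILLEGAL.finditer(text):
    print('found 0x%02X at offset %d' % (ord(m.group()), m.start()))

with open('report_cleaned.rdl', 'wb') as f:
    f.write(ILLEGAL.sub('', text).encode('utf-8'))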
In one of the fields there is an invalid character, as shown in the image, and Report Manager is unable to process it while executing the .rdl file. We got the DB backup from the client and replicated it on other systems as well.
[Image: Invalid Character]

yii2 SluggableBehavior does not transform Cyrillic to Latin

I am using
yii\behaviors\SluggableBehavior
to transform a Cyrillic title into a Latin alias.
It works fine locally, but when I uploaded the same code to the server it doesn't work: it ignores the Cyrillic characters. It seems the problem is not in the code, because it works fine on the local server. Maybe I need to install some extension for it?
Thanks

chrome "aw, snap" crash, but can't see crash log in chrome://crashes but see DMP generated, so what is the quickest way to interpret this DMP file

My page crashes (I see the "Aw, Snap" page) with roughly 20% probability after about 10 minutes; otherwise it runs fine, seemingly forever.
So I tried:
1) Checked CPU and memory in Task Manager and saw no growth (so no leak).
2) Enabled crash reporting in chrome://settings/
Result:
2.1) Still nothing on the chrome://crashes page, not even a crash ID (0 crashes).
2.2) Nothing in the folder
C:/%User%/AppData/Local/Google/CrashReports (empty), nor in
C:/%User%/AppData/Local/Google/Chrome/User Data/Crash Reports (folder doesn't exist).
2.3) But I do see DMP files in
C:/%User%/AppData/Local/Google/Chrome/User Data/CrashPads/reports
though they don't seem readable, and that doesn't seem to be the usual location for crash logs anyway.
3) I can get a Chrome log either via command-line arguments or using Sawbuck, but found nothing except two errors: one from Sawbuck itself, and another saying the report can't be sent to Google.
So the questions are:
1) Are those DMP files the crash logs? (Has the default dump directory changed for Chrome v50?)
2) How can I extract information from a DMP file if the chrome://crashes page shows nothing (for Chrome on Windows)?
P.S. Two usage pages are at https://www.chromium.org/developers/decoding-crash-dumps and
https://www.chromium.org/developers/crash-reports
but they don't seem applicable on Windows without recompiling Chrome components. Are there any third-party tools to interpret the DMP file?
Environment info:
Chrome version: 50.0.2661.02 m; Host OS: Windows 10
The crash dumps (.dmp files) in C:\Users\<user>\AppData\Local\Google\Chrome\User Data\Crashpad\reports can be read by standard Windows debuggers. WinDbg is one tool (provided by Microsoft) for analysing these dumps; it's not going to win any beauty contests, but it's powerful and gets the job done. The recommended way to obtain it is, somewhat bizarrely, via the Windows Driver Kit.
You'll need debugging symbols to make sense of the results, and these aren't included in standard builds of Chrome. To get symbols for both Chrome and the Windows runtime, set the following as your Symbols path:
SRV*c:\symbols*https://msdl.microsoft.com/download/symbols;SRV*c:\symbols*https://chromium-browser-symsrv.commondatastorage.googleapis.com
There are numerous resources on using WinDbg on the web; this cheat sheet contains some useful commands to get you started.
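If you want to script the analysis, here is a rough Python sketch using cdb.exe (the console version that ships alongside WinDbg); the paths are illustrative and need adjusting to your install and dump file:

import subprocess

CDB = r"C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\cdb.exe"
DUMP = r"C:\Users\me\AppData\Local\Google\Chrome\User Data\Crashpad\reports\example.dmp"
SYMBOLS = ("SRV*c:\\symbols*https://msdl.microsoft.com/download/symbols;"
           "SRV*c:\\symbols*https://chromium-browser-symsrv.commondatastorage.googleapis.com")

# -z loads the dump, -y sets the symbol path, -c runs commands and quits;
# "!analyze -v" prints the exception, faulting module and call stack.
out = subprocess.check_output([CDB, "-z", DUMP, "-y", SYMBOLS, "-c", "!analyze -v; q"])
print(out.decode("utf-8", errors="replace"))

The same "!analyze -v" command works interactively in WinDbg after opening the dump via File > Open Crash Dump.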

How can I convert Wikitext Markup containing the double curly bracket functions, into plaintext or html?

I am creating a customized Wiki Markup parser/interpreter. There is one big task, however, in regard to interpreting functions like these:
{{convert|500|ft|m|0}}
which is converted like so:
500 feet (152 m)
I'd like to avoid having to manually code interpretations of these functions, and would rather employ a method where I can query with a string:
akiva@akiva-ThinkPad-X230:~$ wiki-to-text "{{convert|3|to(-)|6|ft|abbr=on}}"
and get a return of:
"3 to 6 ft (0.91–1.83 m)"
Is there a tool to do this? An offline solution would be ideal by far, but I could live with having to query a server.
You could query the MediaWiki API to get parsed text from wikitext. E.g., to parse the template Template:Done from the English Wikipedia, you could use https://en.wikipedia.org/w/api.php?action=parse&text={{Template:done}}&title=Test (see the online docs for parse). You will, however, need a MediaWiki instance that provides the template you want to parse and that works in exactly the same way. If you install a webserver locally, you can set up your own MediaWiki instance and parse wikitext locally, too.
Btw.: there's also the Parsoid project, which implements a Node-based wikitext -> HTML -> wikitext parser. However, IIRC it still needs to query the wiki's API to parse templates.
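For example, here is a minimal Python sketch (assuming the requests library and the public English Wikipedia API) that expands a template call into HTML via action=parse and then crudely strips the tags:

import re
import requests

def wikitext_to_text(wikitext):
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "parse",
            "text": wikitext,
            "contentmodel": "wikitext",
            "prop": "text",
            "format": "json",
        },
    )
    html = resp.json()["parse"]["text"]["*"]
    return re.sub(r"<[^>]+>", "", html).strip()   # crude tag stripping, for illustration only

print(wikitext_to_text("{{convert|500|ft|m|0}}"))  # should print something like "500 feet (152 m)"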