p-code of an if and block - actionscript-3

I was checking the p-code of a script and noticed this:
ofs013c:iffalse ofs0156
...
ofs0156:iffalse ofs020d
...
ofs020d:...
This is the p-code for a basic structure like this:
if('checksomething' && 'checksomethingelse')
{'dosomething'}
'continue'
Since it is an AND operator, shouldn't the jump address in the first line of the p-code, ofs013c:iffalse ofs0156, be ofs020d instead, i.e. ofs013c:iffalse ofs020d? Even though no "proper" checks are done in ofs013c:iffalse ofs020d (if jumped to there from the first check), I wasn't expecting a jump there.
Is this a fault of the tool that I am using? Or is this just how it goes in AS? Or is this what short-circuit evaluation looks like in pretty much anything?
Sorry, I have limited knowledge of AS in general; maybe this isn't even related to AS at all. Most of the time I have to write in p-code rather than AS itself, and when I noticed this I wanted to ask why it is like that. Anyway, have a nice day!
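For comparison, the pattern the question expects (both false-jumps going straight past the body) is easy to see in CPython bytecode; this is only an analogy, not ActionScript p-code:

import dis

# A small function with the same shape as the pseudo-code above.
def f(checksomething, checksomethingelse):
    if checksomething and checksomethingelse:
        print("dosomething")
    print("continue")

dis.dis(f)
# In the disassembly, each condition gets its own "jump if false" instruction
# (the exact opcode name depends on the Python version), and both of them
# target the same offset: the instruction right after the if-body.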


Disable code completion for a shortcut in PhpStorm

I stumbled across this while trying to solve this question: "Searching for Live Template in PhpStorm, from abbreviation (wrong expansion?)".
TL;DR
In PhpStorm, in a Markdown file, when I write three backticks and press Enter, it expands to this:
```angular2html
```
And I would like PhpStorm to stop helping me here and just keep what I typed:
```
```
Detailed description and solution attempts
In a Markdown file (readme.md), I often write code blocks with three backticks:
```
But when I do so, PhpStorm suggests a bunch of stuff:
So if I press Enter here, then it expands to:
```angular2html
```
So how do I change this behaviour?
Solution attempt 1: Deactivate the bundled Angular-plugin
There is a (bundled) plugin called: 'Angular and AngularJS'. If I deactivate that, then it looks like this:
and expands to this:
```apacheconf
```
Solution attempt 2: Ensure Markdown is supported
I'm really baffled why this happens. Does PhpStorm not realize that I'm in an .md-file?
And/or is this the usual desired behaviour when people write three backticks in an .md file?
I can confirm that I have an active (bundled) plugin called: 'Markdown' (version 222.3739.61).
Solution attempt 3: Add a new Live Template
This actually works: make a new Live Template whose text is:
```
$END$
Remember the blank line underneath. This is because I want to press Enter and have it replace the first three backticks, but not the ones PhpStorm automatically adds due to auto-closing of brackets and backticks.
This solution attempt seems quite hacky, though. :-/
From https://youtrack.jetbrains.com/issue/IDEA-266239
The pop up was created to make specification of code fence language easier and to enable automatic code injection. We can’t disable it since it would affect a lot of users using it for fast code injection.
angular2html is the first entry in your case. Somebody else may have another language ID (e.g. aidl).
Right now it cannot be disabled or customized. So you will have to either use your own workaround (with Live Template) or press Esc before hitting Enter.
Other than that: watch that IDEA-266239 ticket (star/vote/comment) to get notified with any progress. No better suggestions right now.

Simple macros for HTML

My HTML file contains in many places the code &nbsp;&nbsp;&nbsp;
It is too short and it doesn't really make sense to replace it with code like
<span class="three-spaces"></span>
I would like to replace it with something like
##TS##
or
%%TS%%
and the file should start with something like:
SET TS = "&nbsp;&nbsp;&nbsp;"
Is there any way to write the HTML this way? I am not looking to compile a source file into HTML; I am looking for a solution that allows writing macros directly into HTML files.
Later edit: here is another example.
I also need to transform
lnk(http://www.example.com)
into
<a target="_blank" href="http://www.example.com">http://www.example.com</a>
Instead of telling him WHY he should not do something, how about telling him HOW he could do it? Maybe his example is not an appropriate need for it, but there are other situations where being able to create a macro would be nice.
For example... I have an HTML page that I'm working on that deals with unit conversions, and quite often I'm having to type things like "cm/in" as "<sup>cm</sup>/<sub>in</sub>", or for volumes "cu-cm/cu-in" as "<sup>cm<sup>3</sup></sup>/<sub>in<sup>3</sup></sub>". It would be really nice from a typing and readability standpoint if I could create macros that were just typed as "%%cm-per-in%%", "%%cc-per-cu-in%%" or something like that.
The way that I have handled things like this in the past was to either write my own preprocessor to make the changes or, if the "sed" utility was available, use it. For this sort of thing, I would basically have a "pre-HTML" file that I edited, and after running it through "sed" or the preprocessor, it would generate an HTML file that I could copy to the web server.
So, the line in the 'sed' file might look like this:
s/%%cc-per-cu-in%%/<sup>cm<sup>3<\/sup><\/sup>\/<sub>in<sup>3<\/sup><\/sub>/g
Since the "/" is a field separator for the substitute command, you need to explicitly escape it with the backslash character ("\") within the replacement portion of the substitute command.
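For comparison, the same pre-HTML-to-HTML pass can be sketched in a few lines of Python instead of sed; the macro table and file names below are made up for illustration:

# Hypothetical macro table; extend it with whatever shorthands you use.
MACROS = {
    '%%cc-per-cu-in%%': '<sup>cm<sup>3</sup></sup>/<sub>in<sup>3</sup></sub>',
    '%%cm-per-in%%': '<sup>cm</sup>/<sub>in</sub>',
}

def preprocess(pre_html):
    # Plain string replacement is enough for fixed macro names.
    for macro, expansion in MACROS.items():
        pre_html = pre_html.replace(macro, expansion)
    return pre_html

# e.g. open('index.html', 'w').write(preprocess(open('index.pre.html').read()))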
Now, you could create a javascript function that would do the text substitution for you, but in my opinion, it is not as nice looking as an actual preprocessor macro substitution. For example, to do what I was doing in the sed script, I would need to create a function that would take as a parameter the short form "nickname" for the longer HTML that would be generated. For example:
function S( x )
{
    if (x == "cc-per-cu-in") {
        document.write("<sup>cm<sup>3</sup></sup>/<sub>in<sup>3</sup></sub>");
    } else if (x == "cm-per-in") {
        document.write("<sup>cm</sup>/<sub>in</sub>");
    } else {
        document.write("<B>***MACRO-ERROR***</B>");
    }
}
And then use it like this:
This is a test of cc-per-cu-in <SCRIPT>S("cc-per-cu-in");</SCRIPT> and
cm-per-in <SCRIPT>S("cm-per-in");</SCRIPT> as an alternative to sed.
This is a test of an error <SCRIPT>S("cc-per-in");</SCRIPT> for a
missing macro substitution.
This generates the following:
This is a test of cc-per-cu-in cm3/in3
and cm-per-in cm/in as an alternative to sed. This is a test of an error MACRO-ERROR for a missing macro substitution.
Yeah, it works, but it is not as readable as if you used a 'sed' substitution.
So, decide for yourself... Which is more readable...
This...
This is a test of cc-per-cu-in <SCRIPT>S("cc-per-cu-in");</SCRIPT> and
cm-per-in <SCRIPT>S("cm-per-in");</SCRIPT> as an alternative to sed.
Or this...
This is a test of cc-per-cu-in %%cc-per-cu-in%% and
cm-per-in %%cm-per-in%% as an alternative to sed.
Personally, I think the second example is more readable and worth the extra trouble to have pre-HTML files that get run through sed to generate the actual HTML files... But, as the saying goes, "Your mileage may vary"...
EDITED: One more thing that I forgot about in the initial post that I find useful when using a pre-processor for the HTML files -- Timestamping the file... Often I'll have a small timestamp placed on a page that says the last time it was modified. Instead of manually editing the timestamp each time, I can have a macro (such as "%%DATE%%", "%%TIME%%", "%%DATETIME%%") that gets converted to my preferred date/time format and put in the file.
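A minimal sketch of that timestamp idea in Python, assuming the pre-HTML file gets run through it like the substitution pass above (the format strings are just examples, not the author's):

from datetime import datetime

def expand_timestamps(text):
    now = datetime.now()
    # Replace each timestamp macro with the current date/time.
    for macro, fmt in (('%%DATE%%', '%Y-%m-%d'),
                       ('%%TIME%%', '%H:%M'),
                       ('%%DATETIME%%', '%Y-%m-%d %H:%M')):
        text = text.replace(macro, now.strftime(fmt))
    return text

# expand_timestamps('Last modified: %%DATETIME%%')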
Since my background is in 'C' and UNIX, if I can't find a way to do something in HTML, I'll often just use one of the command line tools under UNIX or write a small 'C' program to do it. My HTML editing is always in 'vi' (or 'vim' on the PC) and I find that I am often creating tables for alignment of various portions of the HTML page. I got tired of typing all the TABLE, TR, and TD tags, so I created a simple 'C' program called 'table' that I can execute via the '!}' command in 'vi', similar to how you execute the 'fmt' command in 'vi'. It takes as parameters the number of rows & columns to create, whether the column cells are to be split across two lines, how many spaces to indent the tags, and the column widths and generates an appropriately indented TABLE tag structure. Just a simple utility, but saves on the typing.
Instead of typing this:
<TABLE>
<TR>
<TD width=200>
</TD>
<TD width=300>
</TD>
</TR>
<TR>
<TD>
</TD>
<TD>
</TD>
</TR>
<TR>
<TD>
</TD>
<TD>
</TD>
</TR>
</TABLE>
I can type this:
!}table -r 3 -c 2 -split -w 200 300
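The 'table' helper above is the author's own C program, but a rough Python sketch of the same idea might look like this (the -split handling is omitted and the interface is guessed):

def table(rows, cols, widths=(), indent=2):
    pad = ' ' * indent
    lines = ['<TABLE>']
    for r in range(rows):
        lines.append(pad + '<TR>')
        for c in range(cols):
            # Only the first row carries explicit column widths.
            attr = f' width={widths[c]}' if r == 0 and c < len(widths) else ''
            lines.append(pad * 2 + f'<TD{attr}>')
            lines.append(pad * 2 + '</TD>')
        lines.append(pad + '</TR>')
    lines.append('</TABLE>')
    return '\n'.join(lines)

# print(table(3, 2, widths=(200, 300))) reproduces the skeleton shown above.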
Now, with respect to the portion of the original question about being able to create a macro to do HTML links, that is also possible using 'sed' as a pre-processor for the HTML files. Let's say that you wanted to change:
%%lnk(www.stackoverflow.com)
to:
<a href="www.stackoverflow.com">www.stackoverflow.com</a>
you could create this line in the sed script file:
s/%%lnk(\(.*\))/<a href="\1">\1<\/a>/g
'sed' uses regular expressions and they are not what you might call 'pretty', but they are powerful if you know what you are doing.
One slight problem with this example is that it requires the macro to be on a single line (i.e. you cannot split the macro across lines) and if you call the macro multiple times in a single line, you get a result that you might not be expecting. Instead of doing the macro substitution multiple times, it assumes the argument to the macro starts with the first '(' of the first macro invocation and ends with the last ')' of the last macro invocation. I'm not a sed regular expression expert, so I haven't figured out how to fix this yet. For the multiple line portion though, a possible fix would be to replace all the LF characters in the file with some other special character that would not normally be used, run sed on that result, and then convert the special characters back to LF characters. Of course, the problem there is that the entire file would be a single line and if you are invoking the macro, it is going to have the results that I described above. I suspect awk would not have that problem, but I have never had a need to learn awk.
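For what it's worth, the multiple-invocations-per-line problem comes from the greedy .* in the expression; a negated character class, i.e. \([^)]*\) in the sed script, stops each match at the first closing parenthesis. A small Python sketch of the same substitution, run over the whole file so line boundaries don't matter either:

import re

LNK = re.compile(r'%%lnk\(([^)]*)\)')  # [^)]* instead of the greedy .*

def expand_links(text):
    return LNK.sub(r'<a href="\1">\1</a>', text)

# expand_links('%%lnk(www.example.com) and %%lnk(www.stackoverflow.com)')
# expands each invocation separately, even though both are on one line.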
Upon further reflection, I think there might be an easier solution to both the multi-line problem and the multiple-invocations-on-a-single-line problem -- the 'm4' macro preprocessor, which is typically installed alongside a C tool-chain (e.g. with gcc). I haven't tested it much to see what the downsides might be, but it seems to work well enough for the tests that I have performed. You would define a macro like this in your pre-HTML file:
define(`LNK', `<a href="$1">$1</a>')
And yeah, it does use the backwards single quote character to start the text string and the normal single quote character to end the text string.
The only problem that I've found so far is that for the macro names it only allows the characters 'A'-'Z', 'a'-'z', '0'-'9', and '_' (underscore). Since I prefer to type '-' instead of '_', that is a definite disadvantage to me.
Technically inline JavaScript with a <script> tag could do what you are asking. You could even look into the many templating solutions available via JavaScript libraries.
That would not actually provide any benefit, though. JavaScript changes what is ultimately displayed, not the file itself. Since your use case does not change the display it wouldn't actually be useful.
It would be more efficient to consider why the &nbsp; is appearing in the first place and fix that.
This …
My HTML file contains in many places the code &nbsp;&nbsp;&nbsp;
… is actually what is wrong in your file!
&nbsp; is not meant to be used for layout purposes; you should fix that and use CSS instead to lay things out correctly.
&nbsp; is meant to stop words that are separated by a space from being broken apart at the end of a line. For example, numbers and their unit: 5 liters can end up with 5 at the end of one line and liters at the start of the next.
To keep that together you would use 5&nbsp;liters. That's what you use &nbsp; for and nothing else, especially not for layout purposes.
To still answer your question:
HTML is a markup language, not a programming language. That means it is descriptive/static rather than functional/dynamic. If you want to generate HTML dynamically, you need to use something like PHP or JavaScript.
Just an observation from a novice. If everyone did as purists suggest (i.e. the right way), then the web would still be using the same coding conventions it was using 30 years ago. People do things, innovate, and create new ways, then new standards, and deprecate others all the time. Saying "spaces are only for separating words...and nothing else" is silly. For many, many years, when people typed letters, they used one space between words and two spaces between end punctuation and the next sentence. That changed... yeah, things change. There is absolutely nothing wrong with using spaces and non-breaking spaces in ways which assist layout. It is neither useful nor elegant for someone to use a long span with style over and over and over, rather than simple spaces. You can think it is, and your club of do-it-right folks might even agree. But... although "right", they are also being rather silly about it. Question: Will a page with 3 non-breaking spaces validate? Interesting.

mediawiki - make link evaluation case insensitive

I'm running a small wiki and our users would like an interface they find less confusing. The complaint is that a page titled something like 'Big_news' displays as a redlink if the link is 'Big News' or 'big news' or some other upper/lower case permutation, and they'd like these to appear as normal-coloured links if the page exists. When a user clicks on the link, the appropriate page is displayed correctly, but it would be better to see that the page already exists beforehand.
I've tried to implement solutions such as those presented here, here, and here, but they don't work -- links still display as redlinks on the page. [Indeed, I think some of the articles are out of date; MediaWiki 1.27 doesn't seem to have the tables mentioned in them.]
Any ideas how I might go about doing this?
You could look at how $wgCapitalLinks is being used. Chances are, all-lowercase titles will need special casing in the same places where code needs to be branched based on that setting.
You could hook on HtmlPageLinkRendererBegin and use the link target to run a database query to find any case-insensitive matches for the page name (on page title, and it'd have to do this only for internal links), and then replace the target if there's a match.
Thanks for the tip, @Sam Wilson. That looks like an interesting function, but unless I miss my guess, I'd have to query the database for every single link in a page -- correct? If so, I think performance would suffer. Anyway, that hook didn't seem to work for me [mostly because my unfamiliarity with MediaWiki left me scratching my head...]. The solution I came up with is as follows:
1- Add the variable $wgLinksIgnoreCase to your LocalSettings.php file. Set this to true if you want link displays to be mapped case-insensitively.
2- Modify the file includes/parser/LinkHolderArray.php as follows [diff accurate for MediaWiki version 1.29]:
283a284
> global $wgLinksIgnoreCase;
370a373,376
> if (!empty($wgLinksIgnoreCase)) {
> $mapper = array_combine(array_keys($colours), array_keys($colours));
> $mapper = array_change_key_case($mapper);
> }
373a380,381
> if (!empty($wgLinksIgnoreCase) && isset($mapper[strtolower($pdbk)]))
> $pdbk = $mapper[strtolower($pdbk)];
As I say, I'm not very familiar with the software, so if anyone who is familiar with it finds a more elegant solution, feel free to chime in.

Regular expressions - finding and comparing the first instance of a word

I am currently trying to write a regular expression to pull links out of a page I have. The problem is that the links need to be pulled out only if the links have 'stock', for example. This is an outline of what I have, code-wise:
<td class="prd-details">
<a href="somepage">
...
<span class="collect unavailable">
...
</td>
<td class="prd-details">
<a href="somepage">
...
<span class="collect available">
...
</td>
What I would like to do is pull out the links only if 'collect available' is in the tag. I have tried to do this with the regular expression:
(?s)prd-details[^=]+="([^"]+)" .+?collect{1}[^\s]+ available
However, on running it, it will find the first 'prd-details' class and keep going until it finds 'collect available', thereby capturing the incorrect results. I thought that by specifying the {1} after the word collect it would only use the first instance of the word it finds, but apparently I'm wrong. I've been trying different things such as positive and negative lookaheads, but I can't seem to get anything to work.
Might anyone be able to help me with this issue?
Thanks,
Dan
You need an expression that knows "collect unavailable" is junk. You should be able to use a negative lookahead with your wildcard after the link capture. Something like:
prd-details[^=]+="([^"]+)"(.(?!collect un))+?collect available
This will collect any character after the link that isn't followed by "collect un". This should eliminate capturing the "collect unavailable" chunk along with "collect available".
I tested in C# treating the text as a single line. You may need a slightly different syntax and options depending on your language and regex library.
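For reference, the same expression appears to work unchanged in Python, with (?s) playing the role of the single-line option; a quick way to try it (the markup is adapted from the question, with the hrefs renamed so the result is visible):

import re

html = '''<td class="prd-details">
<a href="page-unavailable">
<span class="collect unavailable">
</td>
<td class="prd-details">
<a href="page-available">
<span class="collect available">
</td>'''

pattern = r'(?s)prd-details[^=]+="([^"]+)"(.(?!collect un))+?collect available'
print([m.group(1) for m in re.finditer(pattern, html)])
# prints ['page-available'] -- the cell containing "collect unavailable" is skipped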
If you insist on doing this with regex, I recommend a 2-step split-then-check approach:
First, split into each prd-details.
Then, within each prd-details, see if it contains collect available
If yes, then pull out the href
This is easier than trying to do everything in one step. Easier to read, write, and maintain.
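A rough sketch of that split-then-check approach in Python (the function and variable names are made up, and the split marker should match your real markup):

import re

def available_links(html):
    links = []
    # 1. Split the page into one chunk per prd-details cell.
    for chunk in html.split('<td class="prd-details">'):
        # 2. Keep only the chunks that mention "collect available".
        if 'collect available' in chunk:
            # 3. Pull the href out of that chunk.
            match = re.search(r'<a href="([^"]+)"', chunk)
            if match:
                links.append(match.group(1))
    return links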

How do I match text in HTML that's not inside tags?

Given a string like this:
This is the <a href="http://www.example.com/foo">foo</a> link
... and a search string like "foo", I would like to highlight all occurrences of "foo" in the text of the HTML -- but not inside a tag. In other words, I want to get this:
This is the <a href="http://www.example.com/foo"><b>foo</b></a> link
However, a simple search-and-replace won't work, because it will match part of the URL in the <a> tag's href.
So, to express the above in the form of a question: How do I restrict a regex so that it only matches text outside of HTML tags?
Note: I promise that the HTML in question will never be anything pathological like:
<img title="Haha! Here are some angle brackets to screw you up: ><" />
Edit: Yes, of course I'm aware that there are complex libraries in CPAN that can parse even the most heinous HTML, and thus alleviate the need for such a regex. On many occasions, that's what I would use. However, this is not one of those occasions, since keeping this script short and simple, without external dependencies, is important. I just want a one-line regex.
Edit 2: Again, I know that Template::Refine::Fragment can parse all my HTML for me. If I were writing an application I would certainly use a solution like that. But this isn't an application. It's barely more than a shell script. It's a piece of disposable code. Being a single, self-contained file that can be passed around is of great value in this case. "Hey, run this program" is a much simpler instruction than, "Hey, install a Perl module and then run this-- wait, what, you've never used CPAN before? Okay, run perl -MCPAN -e shell (preferably as root) and then it's going to ask you a bunch of questions, but you don't really need to answer them. No, don't be afraid, this isn't going to break anything. Look, you don't need to answer every question carefully -- just hit enter over and over. No, I promise, it's not going to break anything."
Now multiply the above across a great deal of users who are wondering why the simple script they've been using isn't so simple anymore, when all that's changed is to make the search term boldface.
So while Template::Refine::Fragment may be the answer to someone else's HTML parsing question, it's not the answer to this question. I just want a regular expression that works on the very limited subset of HTML that the script will actually be asked to parse.
If you can absolutely guarantee that there are no angle brackets in the HTML other than those used to open and close tags, this should work:
s%(>|\G)([^<]*?)($key)%$1$2<b>$3</b>%g
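The same "only touch the text outside tags" idea can also be sketched by splitting on the tags first; the version below is Python rather than Perl, offered only as an illustration, and it relies on the same guarantee about stray angle brackets:

import re

def bold_outside_tags(html, term):
    # Alternating text/tag chunks; the capturing group keeps the tags in the list.
    parts = re.split(r'(<[^>]*>)', html)
    for i, part in enumerate(parts):
        if not part.startswith('<'):
            # Only text outside tags is touched, so hrefs and other attributes are safe.
            parts[i] = re.sub(re.escape(term), r'<b>\g<0></b>', part)
    return ''.join(parts)

# bold_outside_tags('This is the <a href="http://www.example.com/foo">foo</a> link', 'foo')
# -> 'This is the <a href="http://www.example.com/foo"><b>foo</b></a> link'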
In general, you want to parse the HTML into a DOM, and then traverse the text nodes. I would use Template::Refine for this:
#!/usr/bin/env perl
use strict;
use warnings;
use feature ':5.10';
use Template::Refine::Fragment;
my $frag = Template::Refine::Fragment->new_from_string('<p>Hello, world. This is a test of foo finding. Here is another foo.');
say $frag->process(
    simple_replace {
        my $n = shift;
        my $text = $n->textContent;
        $text =~ s/foo/<foo>/g;
        return XML::LibXML::Text->new($text);
    } '//text()',
)->render;
This outputs:
<p>Hello, world. This is a test of <foo> finding. Here is another <foo>.</p>
Anyway, don't parse structured data with regular expressions. HTML is not "regular", it's "context-free".
Edit: finally, if you are generating the HTML inside your program, and you have to do transformations like this on strings, "UR DOIN IT WRONG". You should build a DOM, and only serialize it when everything has been transformed. (You can still use TR, however, via the new_from_dom constructor.)
The following regex will match all text between tags or outside of tags:
<.*?>(.*?)<.*?>|>(.*?)<
Then you can operate on that as desired.
Try this one
(?=>)?(\w[^>]+?)(?=<)
It matches all words between tags.
To strip the variable-size contents out of even nested tags, you can use this regex, which is in fact a mini recursive grammar for the job (note: it needs a PCRE-style engine):
(?<=>)((?:\w+)(?:\s*))(?1)*