Edit: Completely changed the question after finding that the problem was elsewhere in the application.
I am working on a Heroku client in Flex and am now building the authentication piece. Heroku uses Basic HTTP Authentication, so I set up my User class to store an email and password and expose a method that returns the Base64-encoded representation of the email and password separated by a colon. The encoder, however, appears to cut off the last 4 characters of the string (tested by encoding the same string through the openssl encoder built into *nix). The code that I am using to encode the values is as follows:
public function getAuthString():String {
    var encoder:Base64Encoder = new Base64Encoder();
    encoder.insertNewLines = false;
    encoder.encode(email + ':' + password);
    trace(email + ':' + password);
    trace(encoder.toString());
    return encoder.toString();
}
The trace of the email and password together is correct, but the encoder.toString() call returns a string that is 4 characters short (45 characters long instead of 49).
Has anyone else run into this problem before? If so how did you fix it?
The ActionScript implementation is working as expected. The openssl implementation assumes a trailing newline in its input. The extra four characters you are seeing are the encoding of that newline byte (one extra Base64 block).
I want to understand this so I can parse data from a private-chain transaction and get the input data that was sent for a particular transaction. I have tried many decoders, but at some point they fail.
This is the simple smart contract I tried using Remix:
contract simple {
    uint256 deliveryID;
    string status;

    function stringAndUint(string _status, uint256 _deliveryID) {
        status = _status;
        deliveryID = _deliveryID;
    }
}
Input data generated:- 0x3c38b7fd0000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000067374617475730000000000000000000000000000000000000000000000000000
I can interpret the following from the above.
function signature: 0x3c38b7fd
_status value: 737461747573
_deliveryID: 0c, but I don't know why the 4 appears, or where the extra 6 before 737461747573 comes from.
The input to the function "stringAndUint" is: "status", 12
Can someone help me understand how the input data is generated and packed into one long hex string?
Try taking a look here http://solidity.readthedocs.io/en/v0.4.24/abi-spec.html#argument-encoding and here http://solidity.readthedocs.io/en/v0.4.24/abi-spec.html#use-of-dynamic-types
Splitting up the encoding (the 4-byte function selector followed by 32-byte chunks) gives:
3c38b7fd (function signature)
0000000000000000000000000000000000000000000000000000000000000040 (the location of the data part of the first parameter, measured in bytes from the start of the arguments block)
000000000000000000000000000000000000000000000000000000000000000c (12)
0000000000000000000000000000000000000000000000000000000000000006 (length of "status". the earlier 0..040 points here)
7374617475730000000000000000000000000000000000000000000000000000 ("status" then zeros padded out to the next multiple of 32 bytes)
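Those chunks can also be walked by hand; a minimal Python sketch (using the calldata hex string from the question) that recovers all three pieces:

```python
# Calldata from the question: 4-byte selector, then 32-byte words.
data = (
    "3c38b7fd"
    "0000000000000000000000000000000000000000000000000000000000000040"
    "000000000000000000000000000000000000000000000000000000000000000c"
    "0000000000000000000000000000000000000000000000000000000000000006"
    "7374617475730000000000000000000000000000000000000000000000000000"
)

selector, args = data[:8], bytes.fromhex(data[8:])

def word(i: int) -> bytes:
    """The i-th 32-byte word of the argument block."""
    return args[i * 32:(i + 1) * 32]

offset = int.from_bytes(word(0), "big")       # 0x40: byte offset of the string's data part
delivery_id = int.from_bytes(word(1), "big")  # the uint256 argument, stored inline
length = int.from_bytes(args[offset:offset + 32], "big")  # string length lives at the offset
status = args[offset + 32:offset + 32 + length].decode()  # then the zero-padded bytes

assert (selector, status, delivery_id) == ("3c38b7fd", "status", 12)
```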
What is the encoding used?
Solidity uses a "Contract ABI" spec for encoding.
What's the deal with the extra (hex) 40 and 6?
@Brendan's answer about these values is better than mine, so I'll delete this section. I'll leave the answer posted because the section below is still useful.
Reproducing programmatically
There is an ABI-decoding tool in Python, called eth-abi, which you can use like so:
from eth_utils import to_bytes
from eth_abi import decode_abi

encoded = to_bytes(hexstr="0x0000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000067374617475730000000000000000000000000000000000000000000000000000")
decoded = decode_abi(['string', 'uint256'], encoded)
assert decoded == (b'status', 12)
I'm using xpdf in an AIR app to convert PDFs to PNGs on the fly. Before conversion I want to get a page count, so I'm using xpdf's pdfinfo utility to print to stdout and then parsing that string to get the page count.
My first-pass solution: split the string by line breaks, test the resulting array for the "Pages:" string, etc.
My solution works, but it feels clunky and fragile. I thought about replacing all the double spaces, splitting on ":" and building a hash table, but there are timestamps with colons in the string which would break that.
Is there a better or smarter way to do this?
protected function processPDFinfo(data:String):void
{
    var pageCount:Number = 0;
    var tmp:Array = data.split("\n");
    for (var i:int = 0; i < tmp.length; i++) {
        var tmpStr:String = tmp[i];
        if (tmpStr.indexOf("Pages:") != -1) {
            var tmpSub:Array = tmpStr.split(":");
            if (tmpSub.length) {
                pageCount = Number(tmpSub[tmpSub.length - 1]);
            }
            break;
        }
    }
    trace("pageCount", pageCount);
}
Title: Developing Native Extensions
Subject: Adobe Flash Platform
Author: Adobe Systems Incorporated
Creator: FrameMaker 8.0
Producer: Acrobat Distiller Server 8.1.0
CreationDate: Mon Dec 7 05:45:39 2015
ModDate: Mon Dec 7 05:45:39 2015
Tagged: yes
Form: none
Pages: 140
Encrypted: no
Page size: 612 x 783 pts (rotated 0 degrees)
File size: 2505564 bytes
Optimized: yes
PDF version: 1.4
Use a regular expression, like this one for example:
/Pages:\s*(\d+)/g
The first (and only) capturing group is the string of digits you are looking for.
var pattern:RegExp = /Pages:\s*(\d+)/g;
var pageCount:int = parseInt(pattern.exec(data)[1]);
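For comparison, the same extraction can be sketched in Python against an abridged version of the pdfinfo output above (re.search returns None when nothing matches, so we guard for that):

```python
import re

# Abridged pdfinfo output from the question.
data = "Producer: Acrobat Distiller Server 8.1.0\nPages: 140\nEncrypted: no\n"

match = re.search(r"Pages:\s*(\d+)", data)
page_count = int(match.group(1)) if match else 0

assert page_count == 140
```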
I understand about 2% of that (/Pages:\s*(\d+)/g). It is looking for the string literal Pages: and then something with a whitespace wildcard and an escaped d+??
I know, regex can be hard. What really helps when creating them is an IDE that supports them. There are also online tools like regexr (my first time using version 2 here, and it's even better than version 1, very nice!). In general, you want a tool that gives you immediate visual feedback on what's being matched.
Below is a screenshot with your text and my pattern in regexr.
You can hover over things and get all kinds of information.
The sidebar to the left is a full fledged documentation on regex.
The optional explain tab goes through the given pattern step by step.
\s* is any amount of whitespace characters and \d+ is at least one numeric digit character.
and returning an array??
This is the AS3 part of the story. Once you create a RegExp object with the pattern, you can use exec() to execute it on some String. (Not sure why they picked such a cryptic abbreviation for the method name.)
The return value is a little funky:
Returns
Object — If there is no match, null; otherwise, an object with the following properties:
An array, in which element 0 contains the complete matching substring, and other elements of the array (1 through n) contain substrings that match parenthetical groups in the regular expression
index — The character position of the matched substring within the string
input — The string (str)
You have to check the documentation of exec() to really understand this. It's kind of JS style, returning a bunch of variables held together in a generic object that also acts as an array.
This is where the [1] in my example code comes from.
In my company we have a webservice to send data from very old projects to pretty new ones. The old projects run PHP 4.4, which natively has no json_encode method, so we used the PEAR class Service_JSON instead. http://www.abeautifulsite.net/using-json-encode-and-json-decode-in-php4/
Today I found out that this class cannot deal with multi-byte characters, because it extensively uses ord() to get char codes from the string and replace the chars. There is no mb_ord() implementation, not even in newer PHP versions. It also uses $string{$index} to access the character at an index; I'm not completely sure whether this supports multi-byte characters.
// Excerpt from the encode() method
// STRINGS ARE EXPECTED TO BE IN ASCII OR UTF-8 FORMAT
$ascii = '';
$strlen_var = $this->strlen8($var);

/*
 * Iterate over every character in the string,
 * escaping with a slash or encoding to UTF-8 where necessary
 */
for ($c = 0; $c < $strlen_var; ++$c) {
    $ord_var_c = ord($var{$c});
    // Here comes a switch which replaces chars according to their hex code and writes them to $ascii
We call

$Service_Json = new Service_JSON();
$data = $Service_Json->encode('Marktplatz, Hauptstraße, Endingen');
echo $data; // prints "Marktplatz, Hauptstra\u00dfe, Endinge". The n is missing
We worked around this by setting up another webservice which receives serialized arrays and returns a json_encoded string. That service runs on a modern machine, so it uses PHP 5.4. But this "solution" is pretty awkward, and I should look for a better one. Does anyone have an idea?
Problem description
German umlauts are replaced properly, BUT then the string is cut off at the end because ord() returns the wrong chars. mb_strlen() does not change anything; it gives the same length as strlen() in this case.
The input string was "Marktplatz, Hauptstraße, Endingen"; the n at the end was cut off. The ß was correctly encoded to \u00df. For every umlaut it cuts off one more character at the end.
It's also possible the cause is our old database encoding, but the replacement itself works correctly, so I guess it's the ord() call.
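The truncation pattern is exactly what a character-count/byte-count mismatch produces: the encoder walks the string byte by byte, so it needs the byte length, while a plain character count comes up one short per multi-byte character. A minimal Python sketch of the same effect, using the string from the question:

```python
s = "Marktplatz, Hauptstraße, Endingen"
raw = s.encode("utf-8")

# 'ß' is 2 bytes in UTF-8, so character count and byte count differ by one.
assert len(s) == 33
assert len(raw) == 34

# Iterating the bytes using the character count drops the final byte ('n'):
truncated = raw[:len(s)].decode("utf-8")
assert truncated == "Marktplatz, Hauptstraße, Endinge"
```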
A colleague found out that
mb_strlen($var, 'ASCII');
solves the problem. We had an older lib version in use which used a plain mb_strlen(). This fix seems to do the same as your mb_convert_encoding().
Problem is solved now. Thank you very much for your help!
Let's assume I have a Windows Phone app capable of handling deep links with "my-app:" moniker. Given the following sample link:
my-app://do/stuff/?artist=Macklemore%20%26%20Ryan%20Lewis&test=1
We can visibly see two query-string params: artist = "Macklemore & Ryan Lewis" and test = "1".
If I create a webpage with that link on it and open the page inside Internet Explorer in the phone, this is what gets to the app UriMapper:
/Protocol?encodedLaunchUri=my-app%3A%2F%2Fdo%2Fstuff%2F%3Fartist%3DMacklemore%20%26%20Ryan%20Lewis%26test%3D1
So it seems that none of the %-encoded values got re-encoded, yet the raw & just before the test parameter was encoded…
This looks to me like a platform bug, as we won't be able to distinguish the & characters we get in the UriMapper!
So the question is: does anyone know a way of using encoded ampersands (%26) in a Windows Phone deep link?
I suspect that the issue might be too low level to access with any of the public APIs.
As a starting point, I figured what we do have at our disposal (and that won't require changes on the user side), which is the ability to detect where the & signs are. From that we can determine if the & is part of a value or a query string delimiter. If the latter, we replace it with a random character and split on that character.
Regex rx = new Regex(@"(\b&.*?)=");
The above regex matches only the & that is followed by = (so it would match &test= below but not Macklemore & Ryan Lewis).
We then replace all instances of & that are matched by the above regex with a random character that won't be used elsewhere. For this example, I just used |.
string mapperInput = @"Protocol?encodedLaunchUri=my-app://do/stuff/?artist=Macklemore & Ryan Lewis&test=1";
string final = rx.Replace(mapperInput,
    new MatchEvaluator(
        new Func<Match, string>(x => x.Value.Replace('&', '|'))
    ));
We then take that result and put it in a collection.
//skip 2 because the first two matches include the protocol section
var values = final.Split(new char[] { '?', '|' }).Skip(2).ToArray();
The values array now contains two elements (which can be iterated and placed into a Dictionary for key-value access):
artist=Macklemore & Ryan Lewis
test=1
This would have to be tested with various inputs that include special characters, but from a quick test it seemed to work fine.
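The same delimiter-disambiguation idea can be sketched in Python with a lookahead instead of a capture-and-replace; the input string is the one from the answer above:

```python
import re

mapper_input = "Protocol?encodedLaunchUri=my-app://do/stuff/?artist=Macklemore & Ryan Lewis&test=1"

# Replace only the '&' characters that act as query delimiters (those
# directly followed by "key=") with '|', then split on that marker.
marked = re.sub(r"&(?=[^&=]+=)", "|", mapper_input)
values = marked.split("?")[2].split("|")

assert values == ["artist=Macklemore & Ryan Lewis", "test=1"]
```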
How do I generate an ETag HTTP header for a resource file?
As long as it changes whenever the resource representation changes, how you produce it is completely up to you.
You should try to produce it in a way that additionally:
doesn't require you to re-compute it on each conditional GET, and
doesn't change if the resource content hasn't changed
Using hashes of content can cause you to fail at #1 if you don't store the computed hashes along with the files.
Using inode numbers can cause you to fail at #2 if you rearrange your filesystem or you serve content from multiple servers.
One mechanism that can work is to use something entirely content dependent such as a SHA-1 hash or a version string, computed and stored once whenever your resource content changes.
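A minimal Python sketch of that content-dependent approach (the quoting convention follows HTTP's ETag syntax; in practice you would compute and store this once per content change, not per request):

```python
import hashlib

def make_etag(content: bytes) -> str:
    # Derived only from the bytes: stable across servers and filesystems,
    # changes exactly when the content changes.
    return '"' + hashlib.sha1(content).hexdigest() + '"'

assert make_etag(b"hello") == make_etag(b"hello")        # deterministic
assert make_etag(b"hello") != make_etag(b"hello world")  # content-sensitive
```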
An etag is an arbitrary string that the server sends to the client, and that the client sends back to the server the next time the file is requested.
The etag should be computable on the server based on the file. Sort of like a checksum, but you might not want to checksum every file as you send it out.
server client
<------------- request file foo
file foo etag: "xyz" -------->
<------------- request file foo
etag: "xyz" (what the server just sent)
(the etag is the same, so the server can send a 304)
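The exchange in the diagram reduces to a simple comparison on the server; a minimal Python sketch (the names are illustrative, not any framework's API):

```python
def respond(if_none_match, current_etag, body):
    # The client's cached ETag still matches: 304 Not Modified, no body.
    if if_none_match == current_etag:
        return 304, b""
    # Otherwise send the full file (along with the fresh ETag in practice).
    return 200, body

assert respond('"xyz"', '"xyz"', b"file contents") == (304, b"")
assert respond('"old"', '"xyz"', b"file contents") == (200, b"file contents")
```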
I built up a string in the format "datestamp-file size-file inode number". So, if a file is changed on the server after it has been served out to the client, the newly regenerated etag won't match if the client re-requests it.
#include <stdio.h>
#include <sys/stat.h>

char *mketag(char *s, struct stat *sb)
{
    sprintf(s, "%ld-%ld-%ld", (long)sb->st_mtime, (long)sb->st_size, (long)sb->st_ino);
    return s;
}
From http://developer.yahoo.com/performance/rules.html#etags:
By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.
...
If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether.
How to generate the default Apache ETag in bash:
for file in *; do printf "%x-%x-%x\t$file\n" `stat -c%i $file` `stat -c%s $file` $((`stat -c%Y $file`*1000000)) ; done
Even when I was looking for something exactly like the etag (the browser asks for a file only if it has changed on the server), it never worked, and I ended up using a GET trick (adding a timestamp as a GET argument to the js files).
I've been using Adler-32 as an HTML link shortener. I'm not sure whether this is a good idea, but so far I haven't noticed any duplicates. It may work as an etag generator, and it should be faster than hashing with a cryptographic scheme like SHA, but I haven't verified this. The code I use is:
shortlink = str(hex(zlib.adler32(link)+(2**32-1)/2))[2:-1]
I would recommend not using them and going for last-modified headers instead.
Askapache has a useful article on this. (as they do pretty much everything it seems!)
http://www.askapache.com/htaccess/apache-speed-etags.html
The code example of Mark Harrison is similar to what is used in Apache 2.2. But such an algorithm causes problems for load balancing when you have two servers with the same file but different inodes for it. That's why in Apache 2.4 the developers simplified the ETag scheme and removed the inode part. Also, to make the ETag shorter, it is usually encoded in hex:
#include <inttypes.h>
#include <stdio.h>
#include <sys/stat.h>

char *mketag(char *s, struct stat *sb)
{
    sprintf(s, "\"%" PRIx64 "-%" PRIx64 "\"", (uint64_t)sb->st_mtime, (uint64_t)sb->st_size);
    return s;
}
or for Java
etag = '"' + Long.toHexString(lastModified) + '-' +
Long.toHexString(contentLength) + '"';
for C#
// Generate ETag from file's size and last modification time as unix timestamp in seconds from 1970
public static string MakeEtag(long lastMod, long size)
{
string etag = '"' + lastMod.ToString("x") + '-' + size.ToString("x") + '"';
return etag;
}
public static void Main(string[] args)
{
long lastMod = 1578315296;
long size = 1047;
string etag = MakeEtag(lastMod, size);
Console.WriteLine("ETag: " + etag);
//=> ETag: "5e132e20-417"
}
The function returns an ETag compatible with Nginx. See the comparison of ETags from different servers.