AWS (AwsCrypto) encryption result varies every time - aws-sdk

The returned string varies every time I execute the code below. Is there something I can configure so that it returns a fixed result?
final AwsCrypto crypto = new AwsCrypto();
new String(Base64.getEncoder().encode(crypto.encryptData(masterKeyProvider, EXAMPLE_DATA).getResult()))

This is by design: the Encryption SDK generates a fresh data key for every call and, with the default algorithm suite, appends an ECDSA signature, both of which are random and cannot be removed, so the encrypted string is never constant.
https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/algorithms-reference.html
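The non-determinism can be reproduced with nothing but the JDK. The sketch below uses plain javax.crypto AES-GCM (not the Encryption SDK itself) to show why a fresh random IV alone is enough to make the ciphertext differ on every call, even with identical key and plaintext:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class NonDeterministicDemo {

    // Encrypts the plaintext with the given key and a fresh random IV,
    // mirroring the per-call randomness AwsCrypto introduces.
    public static String encryptOnce(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv); // new IV every call
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return Base64.getEncoder().encodeToString(cipher.doFinal(plaintext));
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey key = gen.generateKey();
        byte[] data = "EXAMPLE_DATA".getBytes(StandardCharsets.UTF_8);
        String first = encryptOnce(key, data);
        String second = encryptOnce(key, data);
        System.out.println(first.equals(second)); // false: same input, different ciphertext
    }
}
```

The Encryption SDK adds even more per-message randomness (a unique data key and the signature), so there is no configuration that makes the output deterministic.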


Chainlink Core Adapter Path Issue: httpGet uint256 returning 0 from treasury.gov API

I am trying to connect to an API with Chainlink to get a uint from the URL in the request below. The problem is that the "volume" value always comes back as 0. I have a feeling the issue is one of two things:
The oracle doesn't like accessing arrays. I've tried "data[0]" as well as "data.0"; both match the jsonPath syntax shown on the Docs page.
The API is returning a string instead of a number (the number is wrapped in quotes). I've tried a bytes32 job as well, only to get back 0x0. Other StackOverflow posts also show oracles reading string numbers as numbers.
The following snippets of code are the only changes made to the "deploy on remix" code shown here in Chainlink Docs: https://docs.chain.link/docs/make-a-http-get-request.
request.add("get", "https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v2/accounting/od/avg_interest_rates?sort=-record_date");
request.add("path", "data.0.avg_interest_rate_amt");
The contracts are being deployed on Kovan through Remix/Metamask with plenty of link funding the contract. What could I be doing wrong?
There are a couple of issues:
The response is too large, so the node just stops at the HttpGet task. I've tested it on my node and here's the exact error I'm getting: HTTP response too large, must be less than 32768 bytes. If you can influence the response size, great; otherwise, you'll need your own node that returns a shorter response within that limit.
The result must contain only whole numbers; Solidity doesn't understand decimal points and instead works with integer (wei-style) values. That's why you need to multiply the result by at least 100; 10^18 is the standard, so I'd go with that. The following piece should work for you:
function requestData(string memory _id, string memory _field) public {
    Chainlink.Request memory request = buildChainlinkRequest(jobId, address(this), this.fulfillData.selector);
    request.add("get", "https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v2/accounting/od/avg_interest_rates?sort=-record_date");
    string[] memory path = new string[](3);
    path[0] = "data";
    path[1] = _id;
    path[2] = _field;
    request.addStringArray("path", path);
    int timesAmount = 10**18;
    request.addInt("times", timesAmount);
    sendChainlinkRequestTo(oracle, request, fee);
}
I also added _id and _field as function arguments to query any field of any object. Note, this will only work if you can figure out how to get a shorter response.
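To see what the times parameter does to a decimal value like the one this API returns, here is a stdlib-only Java sketch of the node-side multiplication (the value "2.125" is illustrative, not taken from the live API):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class TimesAdapter {

    // Converts a decimal string from the API into the whole number the
    // contract receives after the node applies the "times" multiplier.
    public static BigInteger applyTimes(String decimalValue, int exponent) {
        return new BigDecimal(decimalValue)
                .multiply(BigDecimal.TEN.pow(exponent))
                .toBigIntegerExact();
    }

    public static void main(String[] args) {
        // e.g. avg_interest_rate_amt might come back as "2.125"
        System.out.println(applyTimes("2.125", 18)); // 2125000000000000000
    }
}
```

The contract then divides by 10^18 (or treats the value as 18-decimal fixed point) whenever it needs the original magnitude.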

How to add back comments/whitespaces in translator using the Antlr4's visitor model

I'm currently writing a TSQL (Sybase/Microsoft SQL) to MySQL translator using the ANTLR4 visitor approach.
I'm able to push comments and whitespaces to different channels so that I can use that information later.
What's not super clear is:
1. how do I get the data back?
2. and more importantly, how do I plug the comments and whitespaces back into my translated MySQL code?
Re: #1, this seems to work to get the list of all tokens including the comments/whitespaces:
public static List<Token> getHiddenTokensFromString(String sqlIn, int hiddenChannel) {
    CharStream charStream = CharStreams.fromString(sqlIn);
    CaseChangingCharStream upper = new CaseChangingCharStream(charStream, true);
    TSqlLexer lexer = new TSqlLexer(upper);
    CommonTokenStream commonTokenStream = new CommonTokenStream(lexer, hiddenChannel);
    commonTokenStream.fill();
    List<Token> hiddenTokens = commonTokenStream.getTokens();
    return hiddenTokens;
}
Re #2, what makes it particularly challenging is that as part of the translation, lines of SQL have to be moved around, some lines removed and some lines added.
Any help will be greatly appreciated.
Thanks.
The ANTLR4 lexer creates a number of tokens, each with an index (a running number). Provided you didn't just skip a token, all tokens are available for later inspection, once the parsing step is done, regardless of their channels (the channel is actually just a number property on a token).
So, given you have a token you want to translate, get its index and then ask the token stream for the tokens with the next smaller index or next higher index. These are usually the hidden whitespaces.
Once you have the whitespace token use its start and stop index to get the original text from the char stream. And since you know where you are in the translation process when you do that, it should be easy to know where to insert the original text.
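The neighbor lookup described above can be sketched without the ANTLR runtime; in real code, BufferedTokenStream.getHiddenTokensToLeft() and getHiddenTokensToRight() already implement it for you. The Tok record below is a stand-in for ANTLR's Token, not the actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class HiddenTokenLookup {

    // Minimal stand-in for an ANTLR token: index, channel, text.
    public record Tok(int index, int channel, String text) {}

    static final int DEFAULT_CHANNEL = 0;

    // Collects tokens on the given hidden channel that sit immediately
    // before the token at tokenIndex, stopping at the previous visible token.
    public static List<Tok> hiddenTokensToLeft(List<Tok> tokens, int tokenIndex, int channel) {
        List<Tok> result = new ArrayList<>();
        for (int i = tokenIndex - 1; i >= 0; i--) {
            Tok t = tokens.get(i);
            if (t.channel() == DEFAULT_CHANNEL) break; // previous visible token reached
            if (t.channel() == channel) result.add(0, t);
        }
        return result;
    }

    public static void main(String[] args) {
        int WS = 1, COMMENTS = 2;
        List<Tok> tokens = List.of(
                new Tok(0, DEFAULT_CHANNEL, "SELECT"),
                new Tok(1, WS, " "),
                new Tok(2, COMMENTS, "/* cols */"),
                new Tok(3, WS, " "),
                new Tok(4, DEFAULT_CHANNEL, "name"));
        // Hidden comment attached to the "name" token
        System.out.println(hiddenTokensToLeft(tokens, 4, COMMENTS));
    }
}
```

When emitting the translated MySQL, you would run this lookup for each token you copy over and re-insert the attached hidden text at the new position.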

Do Couchbase reactive clients guarantee order of rows in view query result

I use Couchbase Java SDK 2.2.6 with Couchbase server 4.1.
I query my view with the following code
public <T> List<T> findDocuments(ViewQuery query, String bucketAlias, Class<T> clazz) {
    // We specifically set reduce false and include docs to retrieve docs
    query.reduce(false).includeDocs();
    log.debug("Find all documents, query = {}", decode(query));
    return getBucket(bucketAlias)
            .query(query)
            .allRows()
            .stream()
            .map(row -> fromJsonDocument(row.document(), clazz))
            .collect(Collectors.toList());
}

private static <A> A fromJsonDocument(JsonDocument saved, Class<A> clazz) {
    log.debug("Retrieved json document -> {}", saved);
    A object = fromJson(saved.content(), clazz);
    return object;
}
In the logs from the fromJsonDocument method I see that rows are not always sorted by the row key. Usually they are, but sometimes they are not.
If I just run this query in browser couchbase GUI, I always receive results in expected order. Is it a bug or expected that view query results are not sorted when queried with async client?
What is the behaviour in different clients, not java?
This is due to the asynchronous nature of your call in the Java client + the fact that you used includeDocs.
What includeDocs will do is that it will weave in a call to get for each document id received from the view. So when you look at the asynchronous sequence of AsyncViewRow with includeDocs, you're actually looking at a composition of a row returned by the view and an asynchronous retrieval of the whole document.
If a document retrieval has a little bit of latency compared to the one for the previous row, it could reorder the (row+doc) emission.
But good news everyone! There is an includeDocsOrdered alternative in the ViewQuery that takes exactly the same parameters as includeDocs but ensures that the AsyncViewRows come in the same order as returned by the view.
This is done by eagerly triggering the get retrievals but then buffering those that arrive out of order, so as to maintain the original order without sacrificing too much performance.
That is quite specific to the Java client, with its usage of RxJava. I'm not even sure other clients have the notion of includeDocs...
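The buffering idea behind includeDocsOrdered can be illustrated with a small stdlib-only sketch (class and method names here are made up, not the SDK's API): document fetches complete in any order, but rows are only emitted once every earlier row has arrived.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class OrderedEmitter {
    private final TreeMap<Integer, String> buffer = new TreeMap<>();
    private final List<String> emitted = new ArrayList<>();
    private int nextExpected = 0;

    // Accepts (rowIndex, doc) pairs arriving in any order and emits them
    // strictly in row order, buffering anything that arrives early.
    public void onArrival(int rowIndex, String doc) {
        buffer.put(rowIndex, doc);
        while (!buffer.isEmpty() && buffer.firstKey() == nextExpected) {
            emitted.add(buffer.pollFirstEntry().getValue());
            nextExpected++;
        }
    }

    public List<String> emitted() {
        return emitted;
    }

    public static void main(String[] args) {
        OrderedEmitter e = new OrderedEmitter();
        // the get() for row 1 happens to finish before row 0's
        e.onArrival(1, "docB");
        e.onArrival(0, "docA");
        e.onArrival(2, "docC");
        System.out.println(e.emitted()); // [docA, docB, docC]
    }
}
```

The trade-off is memory: early arrivals sit in the buffer until their predecessors complete, which is why the SDK still triggers the fetches eagerly.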

Date Issue when migrating code from WCF to Web API

I'm working on an iOS application that used to work using WCF.
We're changing this product to use MVC Web API instead of WCF.
I'm facing a problem with the dates! They must be in a JSON format such as
/Date(1373476260000-0600)/
But what is returned actually is of this format
/Date(1379484000000)/
which is not accepted by the iOS controller and produces the default date value (as if it were null and just initialized to the default value, 12/31/1969).
I've tried to parse the date into the wanted JSON-format date string, but it resulted in an exception because a DateTime object is expected instead.
I've also tried to add the following line:
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings.DateParseHandling = Newtonsoft.Json.DateParseHandling.DateTimeOffset;
to the WebApiConfig.cs file, but it's not working. Then I tried adding it to the AttributeRoutingHttpConfig.cs file and then to the Global.asax, but no response!
Then I've tried:
var appXmlType = GlobalConfiguration.Configuration.Formatters.XmlFormatter.SupportedMediaTypes.FirstOrDefault(t => t.MediaType == "application/xml");
GlobalConfiguration.Configuration.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType);
ValueProviderFactories.Factories.Add(new JsonValueProviderFactory());
I also added them to the three files mentioned above, but it didn't work!
Any ideas how to solve this?
P.S: I only have access to the Web API code! I can't alter the iOS code!
Thanks.
First, be sure you have set:
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings
.DateFormatHandling = Newtonsoft.Json.DateFormatHandling.MicrosoftDateFormat;
Otherwise you will get the ISO8601 format instead of the Microsoft format. (ISO8601 is much better, but you said you can't change the iOS app.)
Then, you need to realize that for DateTime values, the .Kind has an effect on how the serialization works. If you have one with DateTimeKind.Utc, then it will not contain an offset because that's how this particular format works.
If you want to ensure that an offset is always produced, then use the DateTimeOffset value instead. This will provide an offset of +0000 for UTC.
For example:
var settings = new JsonSerializerSettings
{
    DateFormatHandling = DateFormatHandling.MicrosoftDateFormat
};
var dt = DateTimeOffset.UtcNow;
var json = JsonConvert.SerializeObject(dt, settings);
Debug.WriteLine(json); // "\/Date(1383153418477+0000)\/"
But you need to be very careful with this approach that all consumers honor the offset. For example, if a client receives this and parses it into a DateTime using WCF's DataContractJsonSerializer, there's a known bug that any offset will be treated as if it was the local time of the receiving computer, regardless of what the value of that offset actually is.
If at all possible, you should switch over both the server and the application to use ISO8601 formatting instead.
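To make the wire format concrete, here is a small decoding helper (hypothetical, and written in Java rather than either stack's language, purely to show the format): the millisecond value is the UTC instant, and the optional +hhmm/-hhmm suffix is only a display offset.

```java
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MsJsonDate {

    // Matches the Microsoft JSON date body, e.g. /Date(1373476260000-0600)/
    // (the literal "\/" escapes are already removed by JSON string parsing).
    private static final Pattern FORMAT =
            Pattern.compile("/Date\\((-?\\d+)([+-]\\d{4})?\\)/");

    public static OffsetDateTime parse(String wire) {
        Matcher m = FORMAT.matcher(wire);
        if (!m.matches()) throw new IllegalArgumentException(wire);
        Instant instant = Instant.ofEpochMilli(Long.parseLong(m.group(1)));
        // No suffix means the value is a plain UTC instant.
        ZoneOffset offset = m.group(2) == null ? ZoneOffset.UTC : ZoneOffset.of(m.group(2));
        return instant.atOffset(offset);
    }

    public static void main(String[] args) {
        System.out.println(parse("/Date(1373476260000-0600)/"));
        System.out.println(parse("/Date(1379484000000)/"));
    }
}
```

Note that both forms above denote the same kind of instant; only the presentation offset differs, which is exactly why a DateTimeOffset on the server side round-trips the offset while a UTC DateTime drops it.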
It seems the time zone portion of the date gets lost - try setting this in WebApiConfig:
GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings.DateTimeZoneHandling = Newtonsoft.Json.DateTimeZoneHandling.Local;

How does DBInputFormat work in the case of MySQL?

When running a MapReduce program against a DB like MySQL, I wondered whether the query is fired on the database first, the result set obtained, and the splits then created for the individual mappers, each taking one split.
I believe it first retrieves all the records and then creates the logical splits, as you can see from setInput()'s signature:
public static void setInput(JobConf job,
                            Class<? extends DBWritable> inputClass,
                            String inputQuery,
                            String inputCountQuery)
It takes the inputCountQuery, which lets Hadoop decide the number of mappers and how many records each mapper processes.
Also read the Limitations of the InputFormat section here.
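A stdlib-only sketch of that split logic (the names here are illustrative, not the Hadoop API): the count from inputCountQuery is divided among map tasks, and each mapper then runs the user query restricted to its own chunk, much as DBInputFormat brackets the query with LIMIT/OFFSET per split.

```java
import java.util.ArrayList;
import java.util.List;

public class DbSplitSketch {

    public record Split(long start, long length) {}

    // Divides the total row count (from the count query) among map tasks;
    // the last split absorbs the remainder.
    public static List<Split> getSplits(long totalRows, int numMappers) {
        List<Split> splits = new ArrayList<>();
        long chunk = totalRows / numMappers;
        for (int i = 0; i < numMappers; i++) {
            long start = i * chunk;
            long length = (i == numMappers - 1) ? totalRows - start : chunk;
            splits.add(new Split(start, length));
        }
        return splits;
    }

    // Each mapper runs the user query bounded to its split, roughly:
    public static String boundedQuery(String inputQuery, Split s) {
        return inputQuery + " LIMIT " + s.length() + " OFFSET " + s.start();
    }

    public static void main(String[] args) {
        for (Split s : getSplits(1000, 3)) {
            System.out.println(boundedQuery("SELECT * FROM t ORDER BY id", s));
        }
    }
}
```

This is also why the input query needs a deterministic ordering: if the database returns rows in a different order between mappers, LIMIT/OFFSET windows can overlap or miss rows.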