Get list of commits between two revisions in AWS CodeCommit - aws-sdk

How can I get a list of all commits between two revisions in AWS CodeCommit, using the SDK or AWS CLI?
Essentially, what I need is an AWS CodeCommit way to do git log a..b
There's a batch-get-commits method, but that requires a list with all the commit ids.

If you call getCommit, there is a parents field in the response, which you can use to retrieve the previous commit.
const AWS = require('aws-sdk')

const codecommit = new AWS.CodeCommit()

// walk the parents chain from commitIdTo back towards commitIdFrom
async function listCommitsBetween (repositoryName, commitIdFrom, commitIdTo) {
  const commits = []
  let keepRetrievingCommits = true
  let commitId = commitIdTo
  while (keepRetrievingCommits) {
    try {
      const params = { commitId, repositoryName }
      const ccData = await codecommit.getCommit(params).promise()
      const { author, message, parents } = ccData.commit
      commits.push({
        repositoryName,
        commitId,
        author: author.email,
        message
      })
      if (parents.length === 1) {
        commitId = parents[0]
      } else {
        // the initial commit (no parent) or a merge commit (two parents): stop here
        keepRetrievingCommits = false
      }
      if (commitId === commitIdFrom) { // won't include info of 'commitFrom'
        keepRetrievingCommits = false
      }
    } catch (err) {
      console.error(`Error while getting commit details ${commitId} on repo ${repositoryName}: ${JSON.stringify(err)}`)
      keepRetrievingCommits = false // don't loop forever on a failed call
    }
  }
  return commits
}
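Since await can't be used at the top level of a CommonJS script, the loop is wrapped in a function above; a minimal usage sketch (placeholder names, as in the original):

listCommitsBetween('your repo name', 'commit hash (from)', 'commit hash (to)')
  .then(commits => commits.forEach(c => console.log(c.commitId, c.author, c.message)))
  .catch(console.error)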

Building a nix derivation within a module

I created the following module, services/invidious.nix:
{ pkgs, stdenv, ... }:
stdenv.mkDerivation {
  name = "invidious";
  container = pkgs.dockerTools.buildLayeredImage {
    name = "invidious";
    contents = [ pkgs.busybox pkgs.bash pkgs.invidious ];
    config = {
      Cmd = [ "/bin/bash" ];
      Env = [];
      Volumes = {};
    };
  };
}
My eventual goal is to have several services in modules and use nix-build to build each of those services as containers, and write the resulting image names to a file:
let
  config = import ./config.nix;
  pkgs = config.pkgs;
  invidious = import ./services/invidious.nix;
in rec {
  serviceimages = pkgs.writeText "images.txt" ''
    ${invidious(pkgs)}
  '';
}
and my config.nix just has the pinned pkgs version:
{
  # nixos-22.05 / https://status.nixos.org/
  pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/d86a4619b7e80bddb6c01bc01a954f368c56d1df.tar.gz") {};
}
However, when I use nix-build, I get the following error:
nix-build services.nix -A serviceimages
these 2 derivations will be built:
/nix/store/dbl3bzc05pssq3q9g8wd2i92xpmwf5bb-invidious.drv
/nix/store/4x31hx9nxcbbksi2hsim08djrsj4h1zh-images.txt.drv
building '/nix/store/dbl3bzc05pssq3q9g8wd2i92xpmwf5bb-invidious.drv'...
unpacking sources
variable $src or $srcs should point to the source
error: builder for '/nix/store/dbl3bzc05pssq3q9g8wd2i92xpmwf5bb-invidious.drv' failed with exit code 1;
last 2 log lines:
> unpacking sources
> variable $src or $srcs should point to the source
For full logs, run 'nix log /nix/store/dbl3bzc05pssq3q9g8wd2i92xpmwf5bb-invidious.drv'.
If I try to pull the full logs using the command given, I get the following:
nix log /nix/store/dbl3bzc05pssq3q9g8wd2i92xpmwf5bb-invidious.drv
error: experimental Nix feature 'nix-command' is disabled; use '--extra-experimental-features nix-command' to override
...and if I enable the experimental feature, I see the following:
nix --extra-experimental-features nix-command log /nix/store/dbl3bzc05pssq3q9g8wd2i92xpmwf5bb-invidious.drv
#nix { "action": "setPhase", "phase": "unpackPhase" }
unpacking sources
variable $src or $srcs should point to the source
If I just try to build the same service in a single file, it successfully builds the image:
let
  # nixos-22.05 / https://status.nixos.org/
  pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/d86a4619b7e80bddb6c01bc01a954f368c56d1df.tar.gz") {};
in rec {
  docker = pkgs.dockerTools.buildLayeredImage {
    name = "invidious";
    contents = [ pkgs.busybox pkgs.bash pkgs.invidious ];
    config = {
      Cmd = [ "/bin/sh" ];
      Env = [];
      Volumes = {};
    };
  };
  results = pkgs.writeText "images.txt" ''
    ${docker}
  '';
}
What am I doing wrong with my attempt to use modules?
I figured it out. I didn't need the mkDerivation; I just needed the buildLayeredImage. (mkDerivation runs the generic build phases and so expects a source to unpack, hence the complaint that $src or $srcs was unset, whereas buildLayeredImage already produces a derivation on its own.)
{ pkgs, ... }:
pkgs.dockerTools.buildLayeredImage {
  name = "invidious";
  contents = [ pkgs.busybox pkgs.bash pkgs.invidious ];
  config = {
    Cmd = [ "/bin/bash" ];
    Env = [];
    Volumes = {};
  };
}
The services.nix and config.nix stay the same:
let
  config = import ./config.nix;
  pkgs = config.pkgs;
  invidious = import ./services/invidious.nix;
in rec {
  serviceimages = pkgs.writeText "images.txt" ''
    ${invidious(pkgs)}
  '';
}

{
  # nixos-22.05 / https://status.nixos.org/
  pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/d86a4619b7e80bddb6c01bc01a954f368c56d1df.tar.gz") {};
}

How to make my flutter app read updated json file instead of old json file?

I made an app where I first read a local JSON file, update some of its content inside the app, and save the file. Without closing the app, I want to display the changes by reading the updated JSON file.
I am able to read the JSON file and save changes to it, but when I try to see the changes without closing the app by reading the file again, it always shows the previous data. If I close the app and open it again, it reads the new updated file.
How can I show the changes by reading the updated JSON file without closing the app?
This is my code:
First I read the JSON file inside initState:
Future<void> readJson() async {
  final String response =
      await rootBundle.loadString('jsonfile/primary_values.json');
  final data = jsonDecode(response);
  var values = PrimaryValueJson.fromJson(data);
  setState(() {
    if (primaryKey == 'Doctor SSN :') {
      widget.primaryIndex = values.doc_ssn;
      widget.primaryValue = 'DC0${widget.primaryIndex}';
      print(widget.primaryValue);
      print('this is readjson');
    }
  });
}

@override
void initState() {
  super.initState();
  print("I am doctor init screen");
  readJson();
}
Then I increment doc_ssn by 1 and write it by clicking a button. The function associated with that button is:
_writeJson() async {
  print("this is 1st line writejson: ${widget.primaryIndex}");
  String response =
      await rootBundle.loadString('jsonfile/primary_values.json');
  File path = File('jsonfile/primary_values.json');
  var data = jsonDecode(response);
  var values = PrimaryValueJson.fromJson(data);
  final PrimaryValueJson doctor = PrimaryValueJson(
    doc_ssn: values.doc_ssn + 1,
    phar_id: values.phar_id,
    ssn: values.ssn,
  );
  final update = doctor.toJson();
  path.writeAsStringSync(json.encode(update));
  print('this is writejson:${doctor.doc_ssn}');
  nameController.text = '';
  specialityController.text = '';
  experienceController.text = '';
  widget.primaryIndex = doctor.doc_ssn;
  widget.primaryValue = 'DC0${doctor.doc_ssn}';
}
Future<void> insertRecord(context) async {
  count = count + 1;
  if (nameController.text == '' ||
      specialityController.text == '' ||
      experienceController.text == '') {
    print("Please fill all fields");
  } else {
    try {
      String uri = "http://localhost/hospital_MS_api/insert_doctor.php";
      var res = await http.post(Uri.parse(uri), body: {
        "Doc_SSN": widget.primaryValue,
        "name": nameController.text,
        "speciality": specialityController.text,
        "experience": experienceController.text,
      });
      setState(() {
        _writeJson();
      });
      var response = jsonDecode(res.body);
      if (response["success"] == "true") {
        print("Record Inserted");
      } else {
        print("Record not inserted");
      }
    } catch (e) {
      print(e);
    }
  }
}
Assets are read-only. After writing the asset's contents out to a real file, read from that file as well (instead of loading the asset again):
File path = File('jsonfile/primary_values.json');
...
path.writeAsStringSync(json.encode(update));
...
// read back from the file you wrote, not from rootBundle
// (jsonDecode expects a String, so use readAsStringSync)
var data = jsonDecode(path.readAsStringSync());
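A fuller sketch of the usual pattern (not from the answer above): seed a writable copy of the asset into the app documents directory via the path_provider package, then do all reads and writes against that copy. The helper names (_valuesFile, readValues, writeValues) are hypothetical:

import 'dart:convert';
import 'dart:io';

import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';

// resolve the writable copy of the asset
Future<File> _valuesFile() async {
  final dir = await getApplicationDocumentsDirectory();
  return File('${dir.path}/primary_values.json');
}

Future<Map<String, dynamic>> readValues() async {
  final file = await _valuesFile();
  if (!await file.exists()) {
    // first run: copy the read-only asset into a writable location
    final seed = await rootBundle.loadString('jsonfile/primary_values.json');
    await file.writeAsString(seed);
  }
  // later reads see the latest writes, without restarting the app
  return jsonDecode(await file.readAsString()) as Map<String, dynamic>;
}

Future<void> writeValues(Map<String, dynamic> update) async {
  final file = await _valuesFile();
  await file.writeAsString(json.encode(update));
}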

How to configure deploy_contract for fork Pancakeswap

I would like to fork PancakeSwap on Binance Smart Chain.
I currently have these contracts:
Caketoken.sol
SyrupBar.sol
MasterChef.sol
Migrations.sol
Timelock.sol
How can I write 2_deploy_contract.js so that it deploys each of these contracts?
const CakeToken = artifacts.require("CakeToken");
const SyrupBar = artifacts.require("SyrupBar");
const MasterChef = artifacts.require("MasterChefV2");

let admin = "adress 0X"

module.exports = function (deployer) {
  // 1st deployment
  deployer.deploy(CakeToken).then(function () {
    return deployer.deploy(SyrupBar, CakeToken.address).then(function () {
      return deployer.deploy(MasterChef, CakeToken.address, SyrupBar.address, admin, "1000000000000000000", 4021488)
    })
  })
};
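Two notes. First, artifacts.require() takes the contract name declared inside the .sol file, not the file name, so MasterChef.sol must actually declare contract MasterChefV2 for the snippet above to work (otherwise require "MasterChef"). Second, a minimal sketch of the same migration with async/await, which avoids the nested then() chains (constructor arguments and the admin placeholder kept from the snippet above; adjust for your fork):

const CakeToken = artifacts.require("CakeToken");
const SyrupBar = artifacts.require("SyrupBar");
const MasterChef = artifacts.require("MasterChefV2");

// placeholder: the admin/dev address for MasterChef
let admin = "adress 0X";

module.exports = async function (deployer) {
  await deployer.deploy(CakeToken);
  await deployer.deploy(SyrupBar, CakeToken.address);
  // reward per block ("1000000000000000000" wei) and start block kept from above
  await deployer.deploy(
    MasterChef,
    CakeToken.address,
    SyrupBar.address,
    admin,
    "1000000000000000000",
    4021488
  );
};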

No response when transaction is submitted to sawtooth intkey TP

I am trying to set up a transaction processor with Hyperledger Sawtooth. I tested my TP with Sawtooth 1.0 and it worked fine, but when I use a Sawtooth 1.1 network my transactions are not processed; it seems the request never reaches the TP. I then tried the intkey TP from the SDK and it has the same problem. I followed the transaction submission process from the documentation, but to no avail.
Sawtooth network docker-compose file
version: "2.1"
services:
settings-tp:
image: hyperledger/sawtooth-settings-tp:1.1
container_name: sawtooth-settings-tp-default
depends_on:
- validator
entrypoint: settings-tp -vv -C tcp://validator:4004
validator:
image: hyperledger/sawtooth-validator:1.1
container_name: sawtooth-validator-default
expose:
- 4004
ports:
- "4004:4004"
# start the validator with an empty genesis batch
entrypoint: "bash -c \"\
sawadm keygen && \
sawtooth keygen my_key && \
sawset genesis -k /root/.sawtooth/keys/my_key.priv && \
sawadm genesis config-genesis.batch && \
sawtooth-validator -vv \
--endpoint tcp://validator:8800 \
--bind component:tcp://eth0:4004 \
--bind network:tcp://eth0:8800 \
\""
rest-api:
image: hyperledger/sawtooth-rest-api:1.1
container_name: sawtooth-rest-api-default
ports:
- "8008:8008"
depends_on:
- validator
entrypoint: sawtooth-rest-api -C tcp://validator:4004 --bind rest-api:8008
Transaction processor
/**
* Copyright 2016 Intel Corporation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* ------------------------------------------------------------------------------
*/
'use strict'

const { TransactionHandler } = require('sawtooth-sdk/processor/handler')
const {
  InvalidTransaction,
  InternalError
} = require('sawtooth-sdk/processor/exceptions')
const crypto = require('crypto')
const cbor = require('cbor')

// Constants defined in intkey specification
const MIN_VALUE = 0
const MAX_VALUE = 4294967295
const MAX_NAME_LENGTH = 20

const _hash = (x) =>
  crypto.createHash('sha512').update(x).digest('hex').toLowerCase()

const INT_KEY_FAMILY = 'intkey'
const INT_KEY_NAMESPACE = _hash(INT_KEY_FAMILY).substring(0, 6)

const _decodeCbor = (buffer) =>
  new Promise((resolve, reject) =>
    cbor.decodeFirst(buffer, (err, obj) => (err ? reject(err) : resolve(obj)))
  )

const _toInternalError = (err) => {
  let message = (err.message) ? err.message : err
  throw new InternalError(message)
}

const _setEntry = (context, address, stateValue) => {
  let entries = {
    [address]: cbor.encode(stateValue)
  }
  return context.setState(entries)
}

const _applySet = (context, address, name, value) => (possibleAddressValues) => {
  let stateValueRep = possibleAddressValues[address]
  let stateValue
  if (stateValueRep && stateValueRep.length > 0) {
    stateValue = cbor.decodeFirstSync(stateValueRep)
    let stateName = stateValue[name]
    if (stateName) {
      throw new InvalidTransaction(
        `Verb is "set" but Name already in state, Name: ${name} Value: ${stateName}`
      )
    }
  }

  // 'set' passes checks so store it in the state
  if (!stateValue) {
    stateValue = {}
  }
  stateValue[name] = value
  return _setEntry(context, address, stateValue)
}

const _applyOperator = (verb, op) => (context, address, name, value) => (possibleAddressValues) => {
  let stateValueRep = possibleAddressValues[address]
  if (!stateValueRep || stateValueRep.length === 0) {
    throw new InvalidTransaction(`Verb is ${verb} but Name is not in state`)
  }

  let stateValue = cbor.decodeFirstSync(stateValueRep)
  if (stateValue[name] === null || stateValue[name] === undefined) {
    throw new InvalidTransaction(`Verb is ${verb} but Name is not in state`)
  }

  const result = op(stateValue[name], value)
  if (result < MIN_VALUE) {
    throw new InvalidTransaction(
      `Verb is ${verb}, but result would be less than ${MIN_VALUE}`
    )
  }
  if (result > MAX_VALUE) {
    throw new InvalidTransaction(
      `Verb is ${verb}, but result would be greater than ${MAX_VALUE}`
    )
  }

  // Increment the value in state by value
  // stateValue[name] = op(stateValue[name], value)
  stateValue[name] = result
  return _setEntry(context, address, stateValue)
}

const _applyInc = _applyOperator('inc', (x, y) => x + y)
const _applyDec = _applyOperator('dec', (x, y) => x - y)

class IntegerKeyHandler extends TransactionHandler {
  constructor () {
    super(INT_KEY_FAMILY, ['1.0'], [INT_KEY_NAMESPACE])
  }

  apply (transactionProcessRequest, context) {
    return _decodeCbor(transactionProcessRequest.payload)
      .catch(_toInternalError)
      .then((update) => {
        //
        // Validate the update
        let name = update.Name
        if (!name) {
          throw new InvalidTransaction('Name is required')
        }
        if (name.length > MAX_NAME_LENGTH) {
          throw new InvalidTransaction(
            `Name must be a string of no more than ${MAX_NAME_LENGTH} characters`
          )
        }

        let verb = update.Verb
        if (!verb) {
          throw new InvalidTransaction('Verb is required')
        }

        let value = update.Value
        if (value === null || value === undefined) {
          throw new InvalidTransaction('Value is required')
        }

        let parsed = parseInt(value)
        if (parsed !== value || parsed < MIN_VALUE || parsed > MAX_VALUE) {
          throw new InvalidTransaction(
            `Value must be an integer ` +
            `no less than ${MIN_VALUE} and ` +
            `no greater than ${MAX_VALUE}`)
        }
        value = parsed

        // Determine the action to apply based on the verb
        let actionFn
        if (verb === 'set') {
          actionFn = _applySet
        } else if (verb === 'dec') {
          actionFn = _applyDec
        } else if (verb === 'inc') {
          actionFn = _applyInc
        } else {
          throw new InvalidTransaction(`Verb must be set, inc, dec not ${verb}`)
        }

        let address = INT_KEY_NAMESPACE + _hash(name).slice(-64)

        // Get the current state, for the key's address:
        let getPromise = context.getState([address])

        // Apply the action to the promise's result:
        let actionPromise = getPromise.then(
          actionFn(context, address, name, value)
        )

        // Validate that the action promise results in the correctly set address:
        return actionPromise.then(addresses => {
          if (addresses.length === 0) {
            throw new InternalError('State Error!')
          }
          console.log(`Verb: ${verb} Name: ${name} Value: ${value}`)
        })
      })
  }
}

module.exports = IntegerKeyHandler
SendTransaction
const {createContext, CryptoFactory} = require('sawtooth-sdk/signing')
const cbor = require('cbor')
const {createHash} = require('crypto')
const {protobuf} = require('sawtooth-sdk')
const crypto = require('crypto')

// Creating a Private Key and Signer
const context = createContext('secp256k1')
const privateKey = context.newRandomPrivateKey()
const signer = new CryptoFactory(context).newSigner(privateKey)

const _hash = (x) => crypto.createHash('sha512').update(x).digest('hex').toLowerCase()

// Encoding Your Payload
const payload = {
  Verb: 'get',
  Name: 'foo',
  Value: null
}
const payloadBytes = cbor.encode(payload)

let familyAddr = _hash('intkey').substring(0, 6);
let nameAddr = _hash(payload.Name).slice(-64);
let addr = familyAddr + nameAddr;
console.log(addr);

// Create the Transaction Header
const transactionHeaderBytes = protobuf.TransactionHeader.encode({
  familyName: 'intkey',
  familyVersion: '1.0',
  inputs: [addr],
  outputs: [addr],
  signerPublicKey: signer.getPublicKey().asHex(),
  batcherPublicKey: signer.getPublicKey().asHex(),
  dependencies: [],
  payloadSha512: createHash('sha512').update(payloadBytes).digest('hex')
}).finish()

// Create the Transaction
const signature = signer.sign(transactionHeaderBytes)
const transaction = protobuf.Transaction.create({
  header: transactionHeaderBytes,
  headerSignature: signature,
  payload: payloadBytes
})

// Create the BatchHeader
const transactions = [transaction]
const batchHeaderBytes = protobuf.BatchHeader.encode({
  signerPublicKey: signer.getPublicKey().asHex(),
  transactionIds: transactions.map((txn) => txn.headerSignature),
}).finish()

// Create the Batch
const headerSignature = signer.sign(batchHeaderBytes)
const batch = protobuf.Batch.create({
  header: batchHeaderBytes,
  headerSignature: headerSignature,
  transactions: transactions
})

// Encode the Batch(es) in a BatchList
const batchListBytes = protobuf.BatchList.encode({
  batches: [batch]
}).finish()

// Submitting Batches to the Validator
const request = require('request')
request.post({
  url: 'http://localhost:8008/batches',
  body: batchListBytes,
  headers: {'Content-Type': 'application/octet-stream'}
}, (err, response) => {
  if (err) return console.log(err)
  console.log(response.body)
})
There are architectural differences in Hyperledger Sawtooth between 1.0 and 1.1. One major difference is that the consensus engine was moved outside the validator service. Your docker-compose file has no consensus engine component, and the validator service is not listening on a port for one.
The consensus engine drives block creation. For example, a timer expiry event in PoET causes the validator to create a block, validate it, and broadcast it to the other members of the network; a confirmation from the consensus engine then makes the validator service commit the block to the blockchain.
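For a local network, a minimal sketch of the missing pieces, modeled on the 1.1 example compose files linked below (image tag and port are assumptions here; verify against your setup). Add a consensus engine service, e.g. the devmode engine:

  devmode-engine:
    image: hyperledger/sawtooth-devmode-engine-rust:1.1
    container_name: sawtooth-devmode-engine-rust-default
    depends_on:
      - validator
    entrypoint: devmode-engine-rust -C tcp://validator:5050

and add a consensus bind to the validator entrypoint so the engine can connect:

      sawtooth-validator -vv \
        --endpoint tcp://validator:8800 \
        --bind component:tcp://eth0:4004 \
        --bind consensus:tcp://eth0:5050 \
        --bind network:tcp://eth0:8800 \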
Please find an example docker-compose file with the PoET consensus engine here: https://github.com/hyperledger/sawtooth-core/blob/1-1/docker/compose/sawtooth-default-poet.yaml. Additionally, you may try https://github.com/hyperledger/sawtooth-core/blob/1-1/docker/compose/sawtooth-default.yaml for local development and testing.
There is an explanation of upgrading from version 1.0 to version 1.1 in the Hyperledger Sawtooth FAQ: https://sawtooth.hyperledger.org/faq/upgrade/#id1. Please feel free to ask more questions or suggest updates to this documentation.
You can also refer to the official documentation for the different versions here: https://sawtooth.hyperledger.org/docs/.

Too tedious hooks when querying in REST. Any ideas?

I've just started using Feathers to build a REST server. I need your help with some querying tips. The documentation says:
When used via REST URLs all query values are strings. Depending on the service the values in params.query might have to be converted to the right type in a before hook. (https://docs.feathersjs.com/api/databases/querying.html)
which puzzles me. Does find({ query: { value: 1 } }) mean value === "1", not value === 1? Here is example client-side code that puzzles me:
const feathers = require('@feathersjs/feathers')
const fetch = require('node-fetch')
const restCli = require('@feathersjs/rest-client')

const rest = restCli('http://localhost:8888')
const app = feathers().configure(rest.fetch(fetch))

async function main () {
  const Items = app.service('myitems')
  await Items.create({ name: 'one', value: 1 })

  // works fine. returns [ { name: 'one', value: 1, id: 0 } ]
  console.log(await Items.find({ query: { name: 'one' } }))

  // wow! no data returned. []
  console.log(await Items.find({ query: { value: 1 } })) // []
}
main()
Server side code is here:
const express = require('@feathersjs/express')
const feathers = require('@feathersjs/feathers')
const memory = require('feathers-memory')

const app = express(feathers())
  .configure(express.rest())
  .use(express.json())
  .use(express.errorHandler())
  .use('myitems', memory())

app.listen(8888)
  .on('listening', () => console.log('listen on 8888'))
I've made hooks, which all work fine, but it is too tedious and I think I missed something. Any ideas?
Hook code:
app.service('myitems').hooks({
  before: {
    find: async (context) => {
      const value = context.params.query.value
      if (value) context.params.query.value = parseInt(value)
      return context
    }
  }
})
This behaviour depends on the database and ORM you are using. Some that have a schema (like feathers-mongoose, feathers-sequelize and feathers-knex) will convert values like that automatically.
Feathers itself does not know about your data format and most adapters (like the feathers-memory you are using here) do a strict comparison so they will have to be converted. The usual way to deal with this is to create some reusable hooks (instead of one for each field) like this:
const queryToNumber = (...fields) => {
  return context => {
    const { params: { query = {} } } = context;
    fields.forEach(field => {
      const value = query[field];
      if (value) {
        query[field] = parseInt(value, 10)
      }
    });
  }
}
app.service('myitems').hooks({
  before: {
    find: [
      queryToNumber('age', 'value')
    ]
  }
});
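With that hook registered, the numeric query from the question should now match. A quick check (reusing the async main from the client snippet in the question):

console.log(await Items.find({ query: { value: 1 } }))
// -> [ { name: 'one', value: 1, id: 0 } ]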
Or use something like JSON Schema, e.g. through the validateSchema common hook.