I'm looking to start developing on Solana, but I love to understand what I'm working on. I've taken a look at the documentation, and I can't understand how solana-keygen works. I've tried hard to reproduce the same public address from the same mnemonic, but nothing seems to work. Does anyone know exactly how the address is generated? If you have your private key, how do you derive the public address without using the @solana/web3.js library?
import * as Bip39 from 'bip39'
import * as ed25519 from 'ed25519-hd-key'
import { Keypair } from "@solana/web3.js";

const seed: Buffer = Bip39.mnemonicToSeedSync("title spell imitate observe kidney ready interest border inject quiz misery motor")
const derivedSeed = ed25519.derivePath("m/44'/501'/0'/0'", seed.toString('hex')).key;
const keyPair = Keypair.fromSeed(derivedSeed)
console.log(keyPair.publicKey.toString())
This code works: if I go to https://solflare.com/access and enter the mnemonic, I see that address.
But solana-keygen returns this address for that same mnemonic: nsaayLiawKPiui9fWYCpRdYkdKeqj2fNn9u8LjauEkn
This is a sample wallet. Feel free to experiment with these parameters.
Please, do not fund this wallet.
How is it possible to get the same address that solana-keygen gives me?
I've tried passing all possible parameters to ed25519 and PBKDF2, but it seems I'm missing something in the process.
var bip39 = require('bip39');
var ed25519HdKey = require('ed25519-hd-key');
var solanaWeb3 = require("@solana/web3.js");
Get the master seed from the mnemonic:
masterSeed = bip39.mnemonicToSeedSync(seedPhrase, passPhrase);
Get the derived Solana address seed:
var index = 0;
var derivedPath = "m/44'/501'/" + index +"'/0'";
const derivedSeed = ed25519HdKey.derivePath(derivedPath, masterSeed.toString('hex')).key;
Get the keypair:
keypair = solanaWeb3.Keypair.fromSeed(derivedSeed);
Get the wallet address and secret key:
walletAddr = keypair.publicKey.toBase58();
secretKey = keypair.secretKey;
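Putting the steps together, a minimal self-contained sketch (using the mnemonic from the question, an empty passphrase, and account index 0; assumes the three packages above are installed):

var bip39 = require('bip39');
var ed25519HdKey = require('ed25519-hd-key');
var solanaWeb3 = require("@solana/web3.js");

// Derive the keypair for a given account index along Solana's BIP44 path
function keypairFromMnemonic(mnemonic, passPhrase, index) {
    // BIP39: mnemonic (+ optional passphrase) -> 64-byte master seed
    var masterSeed = bip39.mnemonicToSeedSync(mnemonic, passPhrase);
    // SLIP-0010 ed25519 derivation along m/44'/501'/<index>'/0'
    var derivedPath = "m/44'/501'/" + index + "'/0'";
    var derivedSeed = ed25519HdKey.derivePath(derivedPath, masterSeed.toString('hex')).key;
    // The 32-byte derived seed is used directly as the ed25519 keypair seed
    return solanaWeb3.Keypair.fromSeed(derivedSeed);
}

var kp = keypairFromMnemonic(
    "title spell imitate observe kidney ready interest border inject quiz misery motor", "", 0);
console.log(kp.publicKey.toBase58());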
NOTE: To run this in a browser, use browserify. For example:
Create a file, bip39-input.js:
var bip39 = require('bip39');
Create the browserify bundle:
browserify bip39-input.js -r bip39 -s bip39 \
--exclude=./wordlists/english.json \
--exclude=./wordlists/japanese.json \
--exclude=./wordlists/spanish.json \
--exclude=./wordlists/italian.json \
--exclude=./wordlists/french.json \
--exclude=./wordlists/korean.json \
--exclude=./wordlists/czech.json \
--exclude=./wordlists/portuguese.json \
--exclude=./wordlists/chinese_traditional.json \
-o bip39-bundle.js
Place the bip39-bundle.js file in a script tag.
Actually, I ran into the same problem before and still don't fully get it, but I use another method as a workaround. Basically, I used solana-keygen recover 'prompt://?key=0/0' -o file.json to recover the keypair into a JSON file, then opened the file, copied the private key back into the code, and used let secretKey = Uint8Array.from(privateKey) to load it. You may find details in my blog: https://medium.com/@lianxiongdi/solana-web3-tutorial-2-connect-your-web3-program-to-wallet-39b335f4b4b
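For illustration, loading such a recovered keypair file in code could look like this (a sketch; file.json is the output path from the command above, and it contains the secret key as a JSON array of 64 byte values):

var fs = require('fs');
var solanaWeb3 = require("@solana/web3.js");

// Read the JSON byte array written by solana-keygen and rebuild the keypair
var raw = JSON.parse(fs.readFileSync('file.json', 'utf8'));
var keypair = solanaWeb3.Keypair.fromSecretKey(Uint8Array.from(raw));
console.log(keypair.publicKey.toBase58());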
Your trouble is caused by the fact that solana-keygen new doesn't use the ed25519-hd-key derivation path: it just slices the first 32 bytes from the seed and generates the keypair from them. Don't worry, I had the same troubles as you before figuring this out 😉.
import bip39 from 'bip39'
import { Keypair } from "@solana/web3.js";

async function getKeyCreatedBySolanaKeygenFromMnemonic(mnemonic, password = '') {
  const seed = await bip39.mnemonicToSeed(mnemonic, password)
  // solana-keygen new uses the first 32 bytes of the BIP39 seed directly
  const derivedSeed = seed.subarray(0, 32)
  return Keypair.fromSeed(derivedSeed)
}
async function main() {
const mnemonic = "title spell imitate observe kidney ready interest border inject quiz misery motor"
const kp = await getKeyCreatedBySolanaKeygenFromMnemonic(mnemonic);
console.log('Public key derived from solana-keygen new mnemonic:', kp.publicKey.toBase58())
}
main();
Output:
Public key derived from solana-keygen new mnemonic: nsaayLiawKPiui9fWYCpRdYkdKeqj2fNn9u8LjauEkn
Hey guys, I have a question. I did a build with React, TypeScript, and Sanity CMS. The problem is that when I try to deploy the build to Vercel, it keeps rejecting it, saying: FetchError: invalid json response body at https://portfolio2-1-wn3v.vercel.app/api/getExperience reason: Unexpected token T in JSON at position 0. It works on my local machine, where it finds all the data. I read that it might be a problem somewhere down the line with getStaticProps or when fetching JSON, and yes, I did change the environment variable BASE_URL from localhost:3000 to the Vercel one, but other than that I have no idea what else I should do. Does anyone have experience with this kind of error? Here is my code for the fetch:
import {Experience} from '../typings'

export const fetchExperiences = async () => {
  const res = await fetch(`${process.env.NEXT_PUBLIC_BASE_URL}/api/getExperience`)
  const data = await res.json()
  const experiences: Experience[] = data.experience
  return experiences
}
The getExperience.ts file has the API request handler:
import type{NextApiRequest,NextApiResponse} from 'next'
import {groq} from 'next-sanity';
import {sanityClient} from '../../sanity';
import {Experience} from '../../typings'
const query = groq`
*[_type == "experience"]{
...,
technologies[]->
}
`;
type Data ={
experience:Experience[]
}
export default async function handler(
req:NextApiRequest,
res:NextApiResponse<Data>,
){
const experience:Experience[]= await sanityClient.fetch(query)
res.status(200).json(JSON.parse(JSON.stringify({experience})))
}
And this is the relevant part of the index.ts file:
export const getStaticProps: GetStaticProps<Props> = async() => {
const experience : Experience[] = await fetchExperiences();
const skills : Skill[] = await fetchSkills();
const projects : Project[] = await fetchProjects();
const socials : Social[] = await fetchSocials();
return{
props:{
experience,
skills,
projects,
socials,
},
revalidate:10
}
}
The error link you see (https://portfolio2-1-wn3v.vercel.app/api/getExperience) is the preview deployment link from Vercel. Every time you deploy to Vercel, it creates a preview link of the form https://yourappname-(some unique deployment id).vercel.app.
However, in your fetch you pass ${process.env.NEXT_PUBLIC_BASE_URL}, which will work locally and probably in production, but not on your preview deployments (staging).
Unfortunately, you cannot just pass /api/getExperience, as only absolute URLs are supported. Therefore, I suggest the following approach, avoiding the API call entirely as suggested in the [nextjs docs][1]:
you create an experience-queries.ts file in lib/
you add your GROQ query in there:
export const getExperiences = groq`*[_type == "experience"]{..., technologies[]->}`
in index.ts, in getStaticProps, you call getExperiences:
const experiences = await sanityClient.fetch(getExperiences);
Note: Be careful with naming between experience (a single item) and experiences (a list of items); make sure you name them as you intend. A consolidated sketch is shown below.
[1]: https://nextjs.org/docs/basic-features/data-fetching/get-static-props#write-server-side-code-directly
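Putting those steps together, a minimal sketch (the file paths and the Experience type are assumed from the question):

// lib/experience-queries.ts
import {groq} from 'next-sanity'

export const getExperiences = groq`*[_type == "experience"]{..., technologies[]->}`

// index.ts
import {GetStaticProps} from 'next'
import {sanityClient} from '../sanity'
import {Experience} from '../typings'
import {getExperiences} from '../lib/experience-queries'

export const getStaticProps: GetStaticProps = async () => {
  // Query Sanity directly on the server instead of fetching your own API route
  const experience: Experience[] = await sanityClient.fetch(getExperiences)
  return {
    props: { experience },
    revalidate: 10,
  }
}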
I have done a lot of research on the internet to learn how exactly the ipfs cat and get methods find and download files from other peers using a CID. I want to fully understand how this process works: "The cat method first searches your own node for the file requested, and if it can't find it there, it will attempt to find it on the broader IPFS network" (https://proto.school/regular-files-api/04).
This is the ipfs source code for cat:
async function * cat (ipfsPath, options = {}) {
ipfsPath = normalizeCidPath(ipfsPath)
if (options.preload !== false) {
const pathComponents = ipfsPath.split('/')
preload(CID.parse(pathComponents[0]))
}
const file = await exporter(ipfsPath, repo.blocks, options)
// File may not have unixfs prop if small & imported with rawLeaves true
if (file.type === 'directory') {
throw new Error('this dag node is a directory')
}
if (!file.content) {
throw new Error('this dag node has no content')
}
yield * file.content(options)
}
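(For context on what the final yield * means for callers: cat is an async generator, so user code consumes the file content as a stream of chunks, and blocks are fetched locally or from the network as iteration proceeds. A minimal usage sketch, assuming a js-ipfs node and a placeholder CID:)

import { create } from 'ipfs-core'

const ipfs = await create()

// cat() returns an async iterable of Uint8Array chunks
const chunks = []
for await (const chunk of ipfs.cat('bafy...your-cid-here')) {
  chunks.push(chunk)
}
console.log(Buffer.concat(chunks).toString('utf8'))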
I deduce that two important arguments that allow for peer routing and file fetching are repo.blocks and preload. repo.blocks is created during ipfs.create() and then passed as a parameter to ipfs.createCat() which is the method that actually creates the cat method. preload is also created by ipfs.create() and passed as an argument to ipfs.createCat() so that it can be used in ipfs.cat(). What confuses me the most is which one of preload or repo.blocks is actually responsible for CID querying. I analyzed the underlying methods for this part of cat:
const pathComponents = ipfsPath.split('/')
preload(CID.parse(pathComponents[0]))
and learned that this is the part of ipfs.cat that makes HTTP connections to other peers. However, this part:
const file = await exporter(ipfsPath, repo.blocks, options)
includes sub-methods like
const block = await blockstore.get(cid, options);
const node = dagPb.decode(block);
which also seem to be related to CID querying through the use of distributed hash tables. blockstore.get did not make use of any methods that seemed to connect to other peers or search for peers that have a CID, but I am still very confused about whether these methods have any relation to CID querying. I would highly appreciate any help on how the cat method works under the hood from someone who is an expert in IPFS, or at least resources I can use to learn the material myself.
I am working on creating an AWS API Gateway. I am trying to create a CloudWatch Log Group and name it API-Gateway-Execution-Logs_${restApiId}/${stageName}. I have no problem with the Rest API creation.
My issue is in converting restApi.id, which is of type pulumi.Output, to a string.
I have tried these two versions, which are proposed in their PR#2496:
const restApiId = apiGatewayToSqsQueueRestApi.id.apply((v) => `${v}`);
const restApiId = pulumi.interpolate `${apiGatewayToSqsQueueRestApi.id}`
here is the code where it is used
const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
`API-Gateway-Execution-Logs_${restApiId}/${stageName}`,
{},
);
stageName is just a string.
I have also tried to apply again, like
const restApiIdString = restApiId.apply((v) => v);
I always get this error from pulumi up:
aws:cloudwatch:LogGroup API-Gateway-Execution-Logs_Calling [toString] on an [Output<T>] is not supported.
Please help me convert Output<string> to string.
@Cameron answered the naming question; I want to answer the question in your title.
It's not possible to convert an Output<string> to string, or any Output<T> to T.
Output<T> is a container for a future value T which may not be resolved even after the program execution is over. For example, your restApiId is generated by AWS at deployment time, so if you run your program in preview, there's no value for restApiId yet.
Output<T> is like a Promise<T> which will be eventually resolved, potentially after some resources are created in the cloud.
Therefore, the only operations with Output<T> are:
Convert it to another Output<U> with apply(f), where f: T -> U
Assign it to an Input<T> to pass it to another resource constructor
Export it from the stack
Any value manipulation has to happen within an apply call.
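As a quick sketch of the first two operations (resource names here are illustrative, not from the question):

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const api = new aws.apigateway.RestApi("my-api");

// apply: transform the future value into another Output
const upperId: pulumi.Output<string> = api.id.apply(id => id.toUpperCase());

// interpolate: build an Output<string> from a template that mixes in Outputs
const logGroupName = pulumi.interpolate`API-Gateway-Execution-Logs_${api.id}/dev`;

// assign an Output to another resource's input (here, the physical name)
const logGroup = new aws.cloudwatch.LogGroup("my-log-group", { name: logGroupName });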
So long as the Output is resolvable while the Pulumi script is still running, you can use an approach like the below:
import {Output} from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as fs from "fs";

// create a GCP registry
const registry = new gcp.container.Registry("my-registry");
const registryUrl = registry.id.apply(_ => gcp.container.getRegistryRepository().then(reg => reg.repositoryUrl));

// create a GCP storage bucket
const bucket = new gcp.storage.Bucket("my-bucket");
const bucketURL = bucket.url;
function GetValue<T>(output: Output<T>) {
return new Promise<T>((resolve, reject)=>{
output.apply(value=>{
resolve(value);
});
});
}
(async()=>{
fs.writeFileSync("./PulumiOutput_Public.json", JSON.stringify({
registryURL: await GetValue(registryUrl),
bucketURL: await GetValue(bucketURL),
}, null, "\t"));
})();
To clarify, this approach only works when you're doing an actual deployment (i.e. pulumi up), not merely a preview (as explained here).
That's good enough for my use-case though, as I just want a way to store the registry-url and such after each deployment, for other scripts in my project to know where to find the latest version.
Short Answer
You can specify the physical name of your LogGroup by specifying the name input and you can construct this from the API Gateway id output using pulumi.interpolate. You must use a static string as the first argument to your resource. I would recommend using the same name you're providing to your API Gateway resource as the name for your Log Group. Here's an example:
const apiGatewayToSqsQueueRestApi = new aws.apigateway.RestApi("API-Gateway-Execution");
const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
"API-Gateway-Execution", // this is the logical name and must be a static string
{
name: pulumi.interpolate`API-Gateway-Execution-Logs_${apiGatewayToSqsQueueRestApi.id}/${stageName}` // this the physical name and can be constructed from other resource outputs
},
);
Longer Answer
The first argument to every resource type in Pulumi is the logical name and is used for Pulumi to track the resource internally from one deployment to the next. By default, Pulumi auto-names the physical resources from this logical name. You can override this behavior by specifying your own physical name, typically via a name input to the resource. More information on resource names and auto-naming is here.
The specific issue here is that logical names cannot be constructed from other resource outputs. They must be static strings. Resource inputs (such as name) can be constructed from other resource outputs.
Encountered a similar issue recently; adding this for anyone who comes looking.
For Pulumi Python, some policies require the input to be stringified JSON. Say you're writing an SQS queue and a DLQ for it; you may initially write something like this:
import json
import pulumi_aws

dlq = pulumi_aws.sqs.Queue("my-dlq")

queue = pulumi_aws.sqs.Queue(
    "my-queue",
    redrive_policy=json.dumps({
        "deadLetterTargetArn": dlq.arn,
        "maxReceiveCount": "3"
    })
)
The issue we see here is that the json lib errors out, stating that type Output cannot be serialized. When you print() dlq.arn, you'd see a memory address for it, like <pulumi.output.Output object at 0x10e074b80>.
In order to work around this, we have to leverage the Output class and write a callback function:
import json
import pulumi_aws
from pulumi import Output

def render_redrive_policy(arn):
    return json.dumps({
        "deadLetterTargetArn": arn,
        "maxReceiveCount": "3"
    })

dlq = pulumi_aws.sqs.Queue("my-dlq")

queue = pulumi_aws.sqs.Queue(
    "my-queue",
    redrive_policy=Output.all(arn=dlq.arn).apply(
        lambda args: render_redrive_policy(args["arn"])
    )
)
I'm trying to create an eth account via RPC in a private network.
What I have done so far:
launch a geth node and create a private network
create a simple JavaScript program using web3 1.0.0 and TypeScript
run it and get the result below, but the account isn't created
Code:
const result = await web3.eth.personal.unlockAccount(senderId, senderPassword, duration)
if (result === true) {
// const newAccountResult = await web3.eth.personal.newAccount('password')
const newAccountResult = await web3.eth.accounts.create('user01')
console.log(newAccountResult)
}
Result:
web3.eth.accounts.create returns the following result
{ address: '0xf10105f862C1cB10550F4EeB38697308c7A290Fc',
privateKey: '0x5cba6b397fc8a96d006988388553ec17a000f7da9783d906979a2e1c482e7fcb',
signTransaction: [Function: signTransaction],
sign: [Function: sign],
encrypt: [Function: encrypt] }
But the web3.eth.getAccounts method returns only one account:
[ '0xaf0034c41928Db81E570061c58c249f61CFF57f2' ]
It seems the web3.eth.accounts.create method succeeded, as the result includes an account address and private key.
But I don't understand why web3.eth.getAccounts doesn't include the created account.
I also checked geth via the console; the result is the same.
> eth.accounts
["0xaf0034c41928db81e570061c58c249f61cff57f2"]
And eth.personal.newAccount didn't work.
Do I need to do something after web3.eth.accounts.create?
I'd appreciate any help.
If I got it right, web3.eth.accounts.create is a way to create accounts without storing them on the local node, so it's basically a way to get a valid keypair on-the-fly without storing anything in the keystore.
web3.eth.personal.newAccount() should be available if you have the personal API activated on your geth node (which is default behavior for ganache; with geth you need to activate it via geth --dev/testnet --rpc --rpcapi eth,web3,personal). Note: of course, you should be very careful with allowing the personal API on mainnet; make sure that RPC access is restricted so only you/privileged users can access it.
(async () => {
let newAccount = await web3.eth.personal.newAccount();
console.log(newAccount);
let accounts = await web3.eth.getAccounts();
console.log(accounts);
})();
Should give something like
0xb71DCf0191E2B90efCD2638781DE40797895De66
[
...
'0xb71DCf0191E2B90efCD2638781DE40797895De66' ]
Refs: https://medium.com/@andthentherewere0/should-i-use-web3-eth-accounts-or-web3-eth-personal-for-account-creation-15eded74d0eb
I am attempting to use cURL with the Pusher API (pusher.com). However, I keep getting the response "Invalid JSON provided (could not parse)". Any help would be appreciated. Here is my trigger function:
function trigger(name, data, channel)
    string_to_sign = "POST\n/apps/"..pusher_app_id.."/events\n"..params
    signature = hmac.digest("sha256", string_to_sign, pusher_secret)
    md5 = md5.sumhexa('{"name":"foo","channel":"test-channel","data":"{\"some\":\"data\"}"}')
    c = curl.new()
    c:setopt(curl.OPT_URL, pusher_server..'apps/'..pusher_app_id..'/events'..'?'..params..'&auth_signature='..signature..'&body_md5='..md5)
    c:setopt(curl.OPT_POST, true)
    c:setopt(curl.OPT_HTTPHEADER, "Content-Type: application/json")
    c:setopt(curl.OPT_POSTFIELDS, '{"name":"'..name..'","channel":"'..channel..'","data":"{\"some\":\"data\"}"}')
    c:perform()
    c:close()
end
If I print the JSON I am putting in OPT_POSTFIELDS and paste it into a JSON validator, it is indeed completely valid. According to the docs this is the proper usage for /events, and my authentication is also working fine.
I went back through my function applying the suggestions moteus made in the comments and was able to resolve my problem by fixing the md5 and including it in the string to sign. I am also using the luajson module to take care of encoding. This fixed the issue.
function trigger(name, data, channel)
    data_table = {
        ["name"] = name,
        ["channel"] = channel,
        ["data"] = data
    }
    -- encode the body once and reuse it for both the md5 and the POST body
    json_data = json.encode(data_table)
    md5 = md5.sumhexa(json_data)
    string_to_sign = "POST\n/apps/"..pusher_app_id.."/events\n"..params.."&body_md5="..md5
    signature = hmac.digest("sha256", string_to_sign, pusher_secret)
    c = curl.new()
    c:setopt(curl.OPT_URL, pusher_server..'apps/'..pusher_app_id..'/events'..'?'..params..'&auth_signature='..signature..'&body_md5='..md5)
    c:setopt(curl.OPT_POST, true)
    c:setopt(curl.OPT_HTTPHEADER, "Content-Type: application/json")
    c:setopt(curl.OPT_POSTFIELDS, json_data)
    c:perform()
    c:close()
end