How do I pass a database connection to Rocket endpoints and functions?

My web application has a REST API via Rocket, connects to a MySQL database, and there's a single endpoint. I cannot figure out how to access the database connection inside the controller:
#![feature(decl_macro)]
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate rocket_contrib;
use rocket_contrib::json::Json;
use serde::Serialize;
#[macro_use]
extern crate mysql;
use mysql as my;
use mysql::consts::ColumnType::MYSQL_TYPE_DATE;
use mysql::prelude::Queryable;
use mysql::Pool;
#[get("/orgs")]
fn get_orgs(conn: MyDb) -> Json<Vec<Org>> {
// I need to have the db connection here to pass it to get_all.
let o = match Org::get_all() {
Ok(o) => o,
Err(e) => panic!(e),
};
Json(o)
}
#[derive(Serialize)]
pub struct Org {
    pub id: Option<i32>,
    pub name: String,
}
fn main() {
    let url = "mysql://root:mysql@(localhost:33061)/somedb";
    let pool = Pool::new(url)?;
    let mut conn = pool.get_conn();
    // I need to pass "conn" around, first to get_orgs, then to Org::get_all.
    rocket::ignite().mount("/", routes![get_orgs]).launch();
}
impl Org {
    fn get_all(mut conn: mysql::Conn) -> Result<Vec<Org>, Err> {
        let all_orgs = conn.query_map("SELECT id, name from organization", |(id, name)| Org {
            id,
            name,
        })?;
        return match all_orgs() {
            Ok(all_orgs) => all_orgs,
            Err(e) => e,
        };
    }
}
My assumption is that #[get("/orgs")] does a bunch of code generation. I found this: https://rocket.rs/v0.4/guide/state/#databases - which looks correct, but I cannot figure out a working example to connect to my MySQL instance via the connection string.
Here are my dependencies:
[dependencies]
rocket = "0.4.2"
rocket_codegen = "0.4.2"
rocket_contrib = "0.4.2"
serde = {version = "1.0", features = ["derive"]}
serde_json = {version = "1.0"}
mysql = "*"
How do I make the MySQL connection and pass it around?

You need to tell Rocket about your database. This is done via Fairings. The document you linked actually includes them in an example:
fn main() {
    rocket::ignite()
        .attach(LogsDbConn::fairing())
        .launch();
}
The important piece above is attach, where the database connection is passed to Rocket as a fairing. Only then can Rocket pass the connection to your route, so that you can use it...
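For concreteness, here is a minimal sketch of that setup adapted to MySQL. It is a sketch under stated assumptions, not the one true wiring: the database name mydb, the Rocket.toml entry, and the rocket_contrib mysql_pool feature are placeholders you would adapt to your project. The #[database] attribute generates a request guard type whose fairing manages the pool and which dereferences to a pooled mysql::Conn:

// Cargo.toml (assumed):
//   rocket = "0.4.2"
//   rocket_contrib = { version = "0.4.2", default-features = false, features = ["json", "mysql_pool"] }
//
// Rocket.toml (assumed; note the usual user:pass@host:port form):
//   [global.databases]
//   mydb = { url = "mysql://root:mysql@localhost:33061/somedb" }

#![feature(decl_macro)]
#[macro_use]
extern crate rocket;
#[macro_use]
extern crate rocket_contrib;

use rocket_contrib::databases::mysql;

// Generates the MyDb request guard; each request gets one pooled connection.
#[database("mydb")]
struct MyDb(mysql::Conn);

#[get("/orgs")]
fn get_orgs(conn: MyDb) -> &'static str {
    // The guard dereferences to mysql::Conn, so the connection can be
    // handed on to your data layer, e.g. Org::get_all(&mut *conn)
    // (with a `mut conn` binding).
    "ok"
}

fn main() {
    rocket::ignite()
        .attach(MyDb::fairing())
        .mount("/", routes![get_orgs])
        .launch();
}

With this in place there is no need to create the pool by hand in main: Rocket builds it from Rocket.toml and injects a connection into any route that takes conn: MyDb.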


detect an enum at runtime and stringify as keys

I have a bunch of interfaces, nested at least 2-3 levels deep, where some of the leaves are numbers/strings, etc., but others are (numeric) enums.
I don't want to change this.
Now I want to "serialize" objects that implement my interfaces as JSON. Using JSON.stringify is good for almost all cases, except for the enums, which are serialized with their (numeric) values.
I know that it's possible to pass a replacer function to JSON.stringify, but I'm stuck: I'm not sure how to write a function that detects the structure of my object and replaces the enum values with the appropriate names.
example:
enum E { X = 0, Y = 1, Z = 2 }
enum D { ALPHA = 1, BETA = 2, GAMMA = 3 }
interface C { e: E; }
interface B { c?: C; d?: D; }
interface A { b?: B; }
function replacer(this: any, key: string, value: any): any {
    return value;
}
function stringify(obj: A): string {
    return JSON.stringify(obj, replacer);
}
const expected = '{"b":{"c":{"e":"Y"},"d":"ALPHA"}}';
const received = stringify({ b: { c: { e: E.Y }, d: D.ALPHA } });
console.log(expected);
console.log(received);
console.log(expected === received);
It's not possible to automatically find out which enum was assigned to a field, not even with TypeScript's emitDecoratorMetadata option. That option can only tell you the field is a Number, and it is only emitted on class fields that have other decorators on them.
The best solution you have is to manually add your own metadata. You can do that using the reflect-metadata node module.
You'd have to find all enum fields on all of your classes and add metadata saying which enum should be used for serializing that field.
import 'reflect-metadata';
enum E
{
    ALPHA = 1,
    BETA = 2,
    GAMMA = 3,
}
class C
{
    // flag what to transform during serialization
    @Reflect.metadata('serialization:type', E)
    enumField: E;
    // the rest will not be affected
    number: number;
    text: string;
}
This metadata could be added automatically if you can write an additional step for your compiler, but that is not simple to do.
Then in your replacer you'll be able to check whether the field was flagged with this metadata, and if it is, you can replace the numeric value with the enum key.
const c = new C();
c.enumField = E.ALPHA;
c.number = 1;
c.text = 'Lorem ipsum';
function replacer(this: any, key: string, value: any): any
{
    const enumForSerialization = Reflect.getMetadata('serialization:type', this, key);
    return enumForSerialization ? enumForSerialization[value] ?? value : value;
}
function stringify(obj: any)
{
    return JSON.stringify(obj, replacer);
}
console.log(stringify(c)); // {"enumField":"ALPHA","number":1,"text":"Lorem ipsum"}
This only works with classes, so you will have to replace your interfaces with classes and replace your plain objects with class instances, otherwise it will not be possible for you to know which interface/class the object represents.
If that is not possible for you then I have a much less reliable solution.
You still need to list all of the enum types for all of the fields of all of your interfaces.
This part could be automated by parsing your TypeScript source code, extracting the enum types for those enum fields, and saving them in a JSON file that you can load at runtime.
Then in the replacer you can guess the interface of an object by checking what all the fields on the this object are, and if they match an interface, you can apply the enum types that you have listed for that interface.
Did you want something like this? It was the best I could think of without using any reflection.
enum E { X = 0, Y = 1, Z = 2 }
enum D { ALPHA = 1, BETA = 2, GAMMA = 3 }
interface C { e: E; }
interface B { c?: C; d?: D; }
interface A { b?: B; }
function replacer(this: any, key: string, value: any): any {
    switch (key) {
        case 'e':
            return E[value];
        case 'd':
            return D[value];
        default:
            return value;
    }
}
function stringify(obj: A): string {
    return JSON.stringify(obj, replacer);
}
const expected = '{"b":{"c":{"e":"Y"},"d":"ALPHA"}}';
const received = stringify({ b: { c: { e: E.Y }, d: D.ALPHA } });
console.log(expected);
console.log(received);
console.log(expected === received);
This solution assumes you know the structure of the object, just as you gave in the example.

thread 'main' panicked at 'assertion failed: self.remaining() >= dst.len()' when trying to connect database MYSQL

When I try to connect to the MySQL database, the main thread panics. It was working fine with SQLite, but with MySQL it fails.
cargo check
also passes fine. The error is located at line 20, where I create the database connection.
my main.rs
use dotenv::dotenv;
use roadoxe::data::AppDatabase;
use structopt::StructOpt;
#[derive(StructOpt, Debug)]
#[structopt(name = "roadoxe")]
struct Opt {
    #[structopt(default_value = "mysql://root:root@127.0.0.1:33060/automate")]
    connection_string: String,
}

fn main() {
    dotenv().ok();
    let opt = Opt::from_args();
    let rt = tokio::runtime::Runtime::new().expect("failed to spawn tokio runtime");
    let handle = rt.handle().clone();
    let database = rt.block_on(async move { AppDatabase::new(&opt.connection_string).await });
    let config = roadoxe::RocketConfig {
        database,
        // maintenance,
    };
    rt.block_on(async move {
        roadoxe::rocket(config)
            .launch()
            .await
            .expect("failed to launch rocket server");
    })
}
and data mod.rs
use sqlx::MySql;
// use std::str::FromStr;
#[derive(Debug, thiserror::Error)]
pub enum DataError {
    #[error("Database Error: {0}")]
    Database(#[from] sqlx::Error),
}

pub type AppDatabase = Database<MySql>;
pub type DatabasePool = sqlx::mysql::MySqlPool;
pub type Transaction<'a> = sqlx::Transaction<'a, MySql>;
pub type AppDatabaseRow = sqlx::mysql::MySqlRow;
pub type AppQueryResult = sqlx::mysql::MySqlQueryResult;

pub struct Database<D: sqlx::Database>(sqlx::Pool<D>);

impl Database<MySql> {
    pub async fn new(path: &str) -> Self {
        let pool = sqlx::mysql::MySqlPoolOptions::new().connect(path).await;
        match pool {
            Ok(pool) => Self(pool),
            Err(e) => {
                eprintln!("{}\n", e);
                eprintln!(
                    "If the database has not yet been created, run \n $ sqlx database setup\n"
                );
                panic!("Failed to connect to database");
            }
        }
    }

    pub fn get_pool(&self) -> &DatabasePool {
        &self.0
    }
}
I have no idea what's happening here; I'm fairly new to Rust. Thanks for your time.
The first issue was the port: it is supposed to be 3306 rather than 33060. (33060 is MySQL's X Protocol port; the classic protocol that sqlx speaks listens on 3306, which is why the connection fails with a low-level panic instead of a clean error.)
The second issue was the host: it doesn't work with 127.0.0.1, but localhost seems to work fine. I just needed to switch
#[structopt(default_value = "mysql://root:root@127.0.0.1:33060/automate")]
with
#[structopt(default_value = "mysql://root:root@localhost:3306/automate")]

Rust Deserializing JSON

I am having trouble deserializing json data sent from my client.
server.rs
use std::collections::HashMap;
use std::sync::{Arc,Mutex};
use tokio::net::{TcpListener, TcpStream};
use tokio::io::{AsyncWriteExt, AsyncReadExt};
use serde_json::{ Value};
/*
The type Arc<T> provides shared ownership of a value of type T, allocated in the heap. Invoking clone on Arc produces a new Arc instance, which points to the same allocation on the heap as the source Arc, while increasing a reference count. When the last Arc pointer to a given allocation is destroyed, the value stored in that allocation (often referred to as “inner value”) is also dropped.
*/
// creating a type alias for user to socket map
// Arc points top
type UserToSocket = Arc<Mutex<HashMap<String,TcpStream>>>;
#[tokio::main]
async fn main() {
    let listener = TcpListener::bind("127.0.0.1:9090").await;
    // creating a threadsafe hashmap mutex
    let local_db: UserToSocket = Arc::new(Mutex::new(HashMap::new()));
    let listener = match listener {
        Result::Ok(value) => value,
        Result::Err(_) => panic!("ERROR OCCURRED"),
    };
    println!("[+] Listener has been started");
    loop {
        // now waiting for connection
        println!("[+] Listening for connection");
        let (socket, addr) = listener.accept().await.unwrap();
        println!("[+] A connection accepted from {:?}, spawning a new task for it", addr);
        // cloning does not actually clone, but rather just increments the reference count
        let ld = Arc::clone(&local_db);
        // spawning a new task
        tokio::spawn(async move {
            handler(socket, ld).await;
        });
    }
}
// a handler for new connection
async fn handler(mut socket: TcpStream, _db: UserToSocket) {
    socket.write_all(b"[+] Hello Friend, Welcome to my program\r\n").await.unwrap();
    let mut buf = vec![0; 1024];
    loop {
        // n holds the number of bytes read, I think
        match socket.read(&mut buf).await {
            Ok(0) => {
                println!("Client Closed connection");
                return;
            }
            // getting some data
            Ok(_n) => {
                // ownership is transferred so need to clone it
                let bufc = buf.clone();
                // unmarshalling json
                //let parsed: Value = serde_json::from_slice(&bufc).unwrap();
                // obtaining string
                match String::from_utf8(bufc) {
                    Ok(val) => {
                        println!("[+] So the parsed value is {}", val);
                        //let temp = val.as_str();
                        let parsed: Value = serde_json::from_str(&val).unwrap();
                        println!("{:?}", parsed);
                        socket.write_all(b"So yeah thanks for sending this\r\n").await.unwrap();
                        continue;
                    }
                    Err(err) => {
                        println!("ERROR Could not convert to string {:?}", err);
                        continue;
                    }
                };
                //socket.write_all(b"Vnekai bujena\r\n").await.unwrap();
            }
            Err(_) => {
                println!("Unhandled error occurred");
                return;
            }
        }
    }
}
client.rs
use tokio::net::TcpStream;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use std::{thread, time};

#[tokio::main]
async fn main() {
    let sleep_time = time::Duration::from_secs(2);
    let socket = TcpStream::connect("127.0.0.1:9090").await;
    let mut socket = match socket {
        Ok(v) => {
            println!("[+] Successfully connected");
            v
        }
        Err(_) => {
            println!("ERROR could not connect to the server");
            std::process::exit(-1);
        }
    };
    let mut buf = vec![0; 1024];
    //let mut user_input = String::new();
    loop {
        thread::sleep(sleep_time);
        match socket.read(&mut buf).await {
            Ok(0) => {
                println!("[+] Connection with server has been closed");
                std::process::exit(1);
            }
            Ok(_n) => {
                let bc = buf.clone();
                let res = String::from_utf8(bc).unwrap();
                println!("[+] Server responded with {}", res);
            }
            Err(_) => {
                panic!("[-] Some fatal error occured");
            }
        }
        println!("You want to say: ");
        /*let _v = match io::stdin().read_line(&mut user_input) {
            Ok(val) => val,
            Err(_) => panic!("ERROR"),
        };*/
        let val = "{\"name\": \"John Doe\",\"age\": 43,\"phones\": [\"+44 1234567\",\"+44 2345678\"]}\r\n";
        socket.write(val.as_bytes()).await.unwrap();
    }
}
When I send JSON data to the server, I receive an error.
thread 'tokio-runtime-worker' panicked at 'called Result::unwrap() on an Err value: Error("trailing characters", line: 2, column: 1)', src\bin\simple_server.rs:79:71
This error does not occur when I try to deserialize the JSON string directly. It only occurs when I send the data over the network.
Since your JSON is newline-terminated, you should use something like read_line() to read it. (Reading into a fixed, zero-filled 1024-byte buffer means the string you hand to serde_json contains the document, the trailing \r\n, and a run of NUL bytes; the bytes after the newline are exactly the "trailing characters" at line 2, column 1 that the panic reports.) And you should never send formatted JSON, because it will contain newlines - but serde_json creates non-formatted JSON by default.
For example, this compiles and should work as intended
use serde_json::Value;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufStream};
use tokio::net::{TcpListener, TcpStream};

type UserToSocket = Arc<Mutex<HashMap<String, TcpStream>>>;

// ... main unchanged from your implementation ...

async fn handler(socket: TcpStream, _db: UserToSocket) {
    let mut socket = BufStream::new(socket);
    socket
        .write_all(b"[+] Hello Friend, Welcome to my program\r\n")
        .await
        .unwrap();
    socket.flush().await.unwrap();
    let mut line = vec![];
    loop {
        line.clear();
        if let Err(e) = socket.read_until(b'\n', &mut line).await {
            println!("Unhandled error occured: {}", e);
            return;
        }
        if line.is_empty() {
            println!("Client Closed connection");
            return;
        }
        println!(
            "[+] So the received value is {}",
            String::from_utf8_lossy(&line)
        );
        let parsed: Value = serde_json::from_slice(&line).unwrap();
        println!("{:?}", parsed);
        socket
            .write_all(b"So yeah thanks for sending this\r\n")
            .await
            .unwrap();
        socket.flush().await.unwrap();
        continue;
    }
}
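For symmetry, here is a sketch of a matching client under the same one-JSON-document-per-line framing. It is an illustration, not part of the original answer: it builds the payload with serde_json, whose Display output is compact single-line JSON, so writing it plus a newline always produces exactly one frame:

use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufStream};
use tokio::net::TcpStream;

#[tokio::main]
async fn main() {
    let socket = TcpStream::connect("127.0.0.1:9090").await.expect("could not connect");
    let mut socket = BufStream::new(socket);
    let mut line = String::new();

    // Read the greeting line the server sends on connect.
    socket.read_line(&mut line).await.unwrap();
    println!("[+] Server said: {}", line.trim_end());

    // Compact, single-line JSON plus '\n' is one complete frame.
    let mut msg = serde_json::json!({
        "name": "John Doe",
        "age": 43,
        "phones": ["+44 1234567", "+44 2345678"]
    })
    .to_string();
    msg.push('\n');
    socket.write_all(msg.as_bytes()).await.unwrap();
    socket.flush().await.unwrap();

    // Read the server's acknowledgement.
    line.clear();
    socket.read_line(&mut line).await.unwrap();
    println!("[+] Server said: {}", line.trim_end());
}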

How to hint the type of a function I do not control?

When parsing a JSON-formatted string I get a linter error:
let mqttMessage = JSON.parse(message.toString())
// ESLint: Unsafe assignment of an `any` value. (#typescript-eslint/no-unsafe-assignment)
I control the content of message so I would like to tell TS that what comes out of JSON.parse() is actually an Object. How can I do that?
Note: I could silence the warning, but I would like to understand if there is a better way to approach the problem.
The problem is that JSON.parse returns an any type.
That's fair enough right - TypeScript doesn't know if it's going to parse out to a string, a number, or an object.
You have a linting rule saying 'Don't allow assigning variables as any'.
So yeah, you could coerce the result of your JSON.parse
type SomeObjectIKnowAbout = {
};
const result = JSON.parse(message.toString()) as SomeObjectIKnowAbout;
What I tend to like doing in this scenario is creating a specific parsing function that asserts at runtime that the object really is of the shape you claim, and does the type cast, so that while you're writing your code you can treat the value as that object.
type SomeObjectIKnowAbout = {
    userId: string;
}
type ToStringable = {
    toString: () => string;
}
function parseMessage(message: ToStringable): SomeObjectIKnowAbout {
    const obj = JSON.parse(message.toString()); // I'm not sure why you are parsing after toStringing tbh.
    if (typeof obj === 'object' && obj.userId && typeof obj.userId === 'string') {
        return obj as SomeObjectIKnowAbout;
    }
    else {
        throw new Error("message was not a valid SomeObjectIKnowAbout");
    }
}
JSON.parse isn't generic, so we can't supply a generic argument to do it.
You have a couple of options.
The simple thing is that since JSON.parse returns any, you can just define the type of what you're assigning it to:
let mqttMessage: MQTTMessage = JSON.parse(message.toString());
(I've used MQTTMessage as a stand-in for the appropriate type.)
That may not be typesafe enough for everyone, though, since it makes the assumption that the string defines what you expect it to define. And it has the problem that if you do it elsewhere, you repeat the assumption.
Instead, you could define a function:
function parseMQTTMessageJSON(json: string): MQTTMessage {
    const x: object = JSON.parse(json);
    if (x && /*...appropriate checks for properties here...*/ "someProp" in x) {
        return x as MQTTMessage;
    }
    throw new Error(`Incorrect JSON for 'MQTTMessage' type`);
}
Then your code is:
let mqttMessage = parseMQTTMessageJSON(message.toString());
As an alternative to type assertions and runtime wrapper functions, you can utilize declaration merging to augment the global JSON object with a generic overload for the parse method. This will allow you to pass through the expected type and give you improved IntelliSense in case you use a reviver when parsing:
interface JSON {
    parse<T = unknown>(text: string, reviver?: (this: any, key: keyof T & string, value: T[keyof T]) => unknown): T
}

type Test = { a: 1, b: "", c: false };
const { a, b, c } = JSON.parse<Test>(
    "{\"a\":1,\"b\":\"\",\"c\":false}",
    // k is "a" | "b" | "c", v is false | "" | 1
    (k, v) => v
);
Or, if you are relying on declaration files to augment global interfaces:
declare global {
    interface JSON {
        parse<T = unknown>(text: string, reviver?: (this: any, key: keyof T & string,
            value: T[keyof T]) => unknown): T
    }
}

Optionally `Take` records from a CSV

I'm using the Rust csv crate to read CSV files. I want to give the user the option to take only the first x records from the CSV.
Given a function like fn read_records(csv_reader: csv::Reader, max_records: Option<usize>) -> ?, I want to do the below:
use std::fs::File;
use std::io::BufReader;
use csv as csv_crate;
use self::csv_crate::StringRecordsIntoIter;

/// Read a csv, and print the first n records
fn read_csv_repro(
    mut file: File,
    max_read_records: Option<usize>,
) {
    let mut csv_reader = csv::ReaderBuilder::new()
        .from_reader(BufReader::new(file.try_clone().unwrap()));
    let records: Box<StringRecordsIntoIter<std::io::BufReader<std::fs::File>>> = match max_read_records {
        Some(max) => {
            Box::new(csv_reader.into_records().take(max).into_iter())
        },
        None => {
            Box::new(csv_reader.into_records().into_iter())
        }
    };
    for result in records {
        let record = result.unwrap();
        // do something with record, e.g. print values from it to console
        let string: Option<&str> = record.get(0);
        println!("First record is {:?}", string);
    }
}

fn main() {
    read_csv_repro(File::open("csv_test.csv").unwrap(), Some(10));
}
I'm struggling with getting my code to work, with the below error from the compiler:
error[E0308]: mismatched types
--> src/main.rs:18:22
|
18 | Box::new(csv_reader.into_records().take(max).into_iter())
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `csv::reader::StringRecordsIntoIter`, found struct `std::iter::Take`
|
= note: expected type `csv::reader::StringRecordsIntoIter<_>`
found type `std::iter::Take<csv::reader::StringRecordsIntoIter<_>>`
How can I get the above code to work?
While Nate's answer works for this specific case, the more general solution here is to use trait objects. My impression is that this is what you were intending to do by using Box here. Otherwise, in Nate's solution, the use of Box is completely superfluous.
Here is code that uses trait objects without needing to do take(std::usize::MAX) (using Rust 2018):
use std::fs::File;
use std::io::BufReader;

/// Read a csv, and print the first n records
fn read_csv_repro(
    file: File,
    max_read_records: Option<usize>,
) {
    let csv_reader = csv::ReaderBuilder::new()
        .from_reader(BufReader::new(file.try_clone().unwrap()));
    let records: Box<dyn Iterator<Item = csv::Result<csv::StringRecord>>> =
        match max_read_records {
            Some(max) => Box::new(csv_reader.into_records().take(max)),
            None => Box::new(csv_reader.into_records()),
        };
    for result in records {
        let record = result.unwrap();
        // do something with record, e.g. print values from it to console
        let string: Option<&str> = record.get(0);
        println!("First record is {:?}", string);
    }
}

fn main() {
    read_csv_repro(File::open("csv_test.csv").unwrap(), Some(10));
}
You have to take(std::usize::MAX) when max_records is None. It's annoying, but both iterators have to have the same type to be stored in the same variable. Also, the .into_iter() calls that you added have no effect, as you were calling them on iterators.
fn read_csv_repro(file: File, max_read_records: Option<usize>) {
    let csv_reader = csv::Reader::from_reader(BufReader::new(file));
    let records: Box<std::iter::Take<StringRecordsIntoIter<std::io::BufReader<std::fs::File>>>> = match max_read_records {
        Some(max) => {
            Box::new(csv_reader.into_records().take(max))
        },
        None => {
            Box::new(csv_reader.into_records().take(std::usize::MAX))
        }
    };
}