Store long strings with Squeryl - mysql

I'd like to use the MySQL VARCHAR(255) or TEXT data type to store the name of a scientific article.
Squeryl creates VARCHAR(128) fields to store strings. How do I configure it to use larger fields?

From http://squeryl.org/schema-definition.html
object Library extends Schema {
  ...
  ...
  on(borrowals)(b => declare(
    b.numberOfPhonecallsForNonReturn defaultsTo(0),
    b.borrowerAccountId is(indexed),
    columns(b.scheduledToReturnOn, b.borrowerAccountId) are(indexed)
  ))
  on(authors)(s => declare(
    s.email is(unique, indexed("idxEmailAddresses")), // indexes can be named explicitly
    s.firstName is(indexed),
    s.lastName is(indexed, dbType("varchar(255)")), // the default column type can be overridden
    columns(s.firstName, s.lastName) are(indexed)
  ))
}
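
Applied to the question, a minimal sketch (the Article class, its name field, and the Publications schema are assumed names, not from the Squeryl docs): the same declare/dbType mechanism overrides the default VARCHAR(128):

import org.squeryl.Schema
import org.squeryl.PrimitiveTypeMode._

class Article(val name: String)

object Publications extends Schema {
  val articles = table[Article]

  on(articles)(a => declare(
    // use dbType("text") instead if 255 characters may not be enough
    a.name is(dbType("varchar(255)"))
  ))
}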

Related

postgres store reference to field in json

It is possible to store JSON in Postgres using the json data type. Check this tutorial for an introduction: http://www.postgresqltutorial.com/postgresql-json/
Suppose I am storing the following JSON in such a field:
{
    "address": {
        "street1": "123 seasame st"
    }
}
I want to store, separately, a reference to the street field. For example, another object might use data from this JSON structure and want to store a reference to where it got the data. Maybe something like this:
class Product():
    __tablename__ = 'Address'
    street_1 = Column(String)
    data_source = ?
Now I could make data_source a string and just store namespaces like address.street, but if I did this, Postgres would have no idea what that means. Working with it in queries would mean parsing the string and other inefficiencies. Does Postgres support referring to fields stored inside JSON data structures?
This question is related to JSON foreign keys in PostgreSQL , but in this case I don't necessarily want a fk relationship. I just want to create a reference, which is not necessarily enforced in the way a fk is.
Update:
To be clearer, I want to reference the location of something in the JSON structure from another attribute and store that reference in a column. In the code below, Address.data_source is a reference to the location of the street data (address.street_1 in this case):
class Address():
    __tablename__ = 'Address'
    street_1 = Column(String)
    sample_id = Column(Integer, ForeignKey('DataSample.uid'))
    data_source = ?

class DataSample():
    __tablename__ = 'DataSample'
    uid = Column(Integer, primary_key=True)
    data = Column(JSONB)
body = {
    "address": {
        "street_1": "123 seasame st"
    }
}
datasample = DataSample(data=body)
address = Address(street_1=datasample.data['address']['street_1'],
                  sample_id=datasample.uid,
                  data_source=?)
As clarified, the question seeks a way to flexibly specify a path within the JSON object of a particular record. Keys are handled in normal columns. Constraints on JSONB fields are not available, and there is no specific support for specifying paths within JSON objects.
I worked with the following in SQL Fiddle using PostgreSQL 9.6:
CREATE TABLE datasample (
    id integer PRIMARY KEY,
    data jsonb
);
CREATE TABLE address (
    id integer PRIMARY KEY,
    street_1 text,
    sample_id integer REFERENCES datasample (id),
    data_source text
);
INSERT INTO datasample (id, data)
VALUES (1, '{"address":{"street_1": "123 seasame st"}}');
INSERT INTO address (id, street_1, sample_id, data_source)
VALUES (1, '123 seasame st', 1, 'datasample.data->''address''->>''street_1''');
A typical lookup of the street address (needed to retrieve street_1) would resemble:
SELECT datasample.data->'address'->>'street_1'
FROM datasample
WHERE id=1;
There is no special Postgres type for identifying columns. Strings are the closest available, and you will need to retrieve the string (or an array of strings, or an object containing strings, if one of those simplifies parsing) and use it to build the query. In the first code block, I stored it as the (escaped) fragment of a query: 'datasample.data->''address''->>''street_1'''. Though longer, it would require only retrieval and unescaping to use in a new custom query. I did not find a way to use the string as a fragment within the same SQL statement, though it might be possible to combine it with other bits of text to form a full statement that could be run through EXECUTE.
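
One alternative worth sketching (not part of the original answer; the address_alt table name is made up): if the reference is stored as a PostgreSQL text[] path rather than a quoted query fragment, the #>> path operator can consume the column directly in plain SQL, with no string parsing or EXECUTE step:

CREATE TABLE address_alt (
    id integer PRIMARY KEY,
    street_1 text,
    sample_id integer REFERENCES datasample (id),
    data_source text[]  -- a path into the JSON, e.g. '{address,street_1}'
);
INSERT INTO address_alt (id, street_1, sample_id, data_source)
VALUES (1, '123 seasame st', 1, '{address,street_1}');

-- the stored path is applied directly inside the query
SELECT d.data #>> a.data_source AS street
FROM address_alt a
JOIN datasample d ON d.id = a.sample_id;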

How to pass user defined variable as argument to function/aggregate in Cassandra 3.0

Table/Type Structure:
Create type typ_pks_details (
    tpks_value Text,
    tpks_date Text,
    tpks_comp_flg Text
);
Create Table pk_summary (
    pks_nbr_key text,
    pks_type_value text,
    pks_type_character text,
    pks_details map<text, frozen<typ_pks_details>>,
    PRIMARY KEY (pks_nbr_key, pks_type_value)
);
Function/Aggregate:
Create FUNCTION sfunc_compute_value (state map<Text,Text>,
                                     ps_type_val Text,
                                     ps_type_char Text,
                                     pm_pks_details map<text, frozen<typ_pks_details>>)
CALLED ON NULL INPUT
RETURNS map<Text,Text>
LANGUAGE java
AS '
    ......<<Other code goes here>>
    //Clarification 2
    //How to traverse typ_pks_details to fetch data
    return state;';
Create AGGREGATE compute_value (
    Text,
    Text,
    map<text, frozen<typ_pks_details>>
    //Clarification 1
    //How to pass the map column pk_summary.pks_details
)
SFUNC sfunc_compute_value
STYPE map<Text,Text>
INITCOND {};
Query:
Select compute_value(pks_type_value, pks_type_character, pks_details) As Summary
From pk_summary
Where pks_nbr_key = '100' Allow Filtering;
Question:
I need to pass a user-defined-type column (pks_details) in the query to the aggregate/function (Clarification 1 in the snippet above) and compute values from its contents (Clarification 2 in the snippet above).
Please note that the scenario above is illustrative; the original tables are not revealed.
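For what it's worth, a sketch of the two missing pieces (untested against this exact schema): for Clarification 1, an aggregate simply lists its argument types in order, so the map column is passed by naming it in the SELECT, exactly as the query above already does. For Clarification 2, inside a Java UDF a frozen UDT arrives as com.datastax.driver.core.UDTValue, so pm_pks_details is a java.util.Map<String, UDTValue> whose fields are read with getString:

// sketch of the UDF body; assumes the parameter and field names above
for (java.util.Map.Entry<String, com.datastax.driver.core.UDTValue> entry :
         pm_pks_details.entrySet()) {
    com.datastax.driver.core.UDTValue details = entry.getValue();
    String value = details.getString("tpks_value");    // fields of typ_pks_details
    String date  = details.getString("tpks_date");
    String flag  = details.getString("tpks_comp_flg");
    // example aggregation step: keep one value per map key
    if (value != null) state.put(entry.getKey(), value);
}
return state;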

Deedle - how to use 'ParseExact' within the Frame.ReadCsv schema

I have a CSV file of data in the form
21.06.2016 23:00:00.349, 153.461, 153.427
21.06.2016 23:00:00.400, 153.460, 153.423
etc
The initial step of creating a frame involves the optional inclusion of a 'schema' to specify or rename column heads and specify types:
let df = Frame.ReadCsv(__SOURCE_DIRECTORY__ + "/data/GBPJPY.csv", hasHeaders=true, inferTypes=false, schema="TS (DateTimeOffset), Bid (float(3)), Ask (float(3))")
I would like to specify the first column of string values to be ParseExact'ed to DateTimeOffset of the format
"dd.mm.yyyy HH:mm:ss.fff"
(I'm assuming the use of the setting System.Globalization.CultureInfo.InvariantCulture).
How do I express the schema such that it will parse the datetime string in that first Frame.ReadCsv("file.csv", schema = ........ )? Or is this not possible to accomplish within the schema statement?
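The schema string cannot invoke ParseExact, so one workaround is to read the column as plain strings and convert afterwards. A sketch (untested; note that in .NET format strings months are MM, not mm, and DateTimeStyles.AssumeUniversal is an assumption about your data's offset):

open System
open System.Globalization
open Deedle

let df = Frame.ReadCsv(__SOURCE_DIRECTORY__ + "/data/GBPJPY.csv",
                       hasHeaders=true, inferTypes=false,
                       schema="TS (string), Bid (float), Ask (float)")

// parse the string column with the exact format, then swap it back in
let ts =
    df.GetColumn<string>("TS")
    |> Series.mapValues (fun s ->
        DateTimeOffset.ParseExact(s.Trim(), "dd.MM.yyyy HH:mm:ss.fff",
                                  CultureInfo.InvariantCulture,
                                  DateTimeStyles.AssumeUniversal))

df.ReplaceColumn("TS", ts)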

Avoiding two distinct domain models with Spring Boot and Jackson

I'm designing a Spring Boot REST API that will be backed by MySQL. It has occurred to me that I want, effectively, two separate models for all my domain objects:
Model 1: Used between the outside world (REST clients) and my Spring REST controllers; and
Model 2: The entities used internally between my Spring Boot app and the MySQL database
For instance I might have a contacts table for holding personal/contact info:
CREATE TABLE contacts (
    contact_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    contact_ref_id VARCHAR(36) NOT NULL,
    contact_first_name VARCHAR(100) NOT NULL,
    ...many more fields
);
and the respective Spring/JPA/Hibernate entity for it might look like:
// Groovy pseudo-code!
@Entity
class Contact {
    @Id
    @Column(name = "contact_id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    @Column(name = "contact_ref_id")
    UUID refId

    @Column(name = "contact_first_name")
    String firstName

    // ...etc.
}
If I only had a single model paradigm, then when Jackson serializes a Contact instance (perhaps fetched back from the DB) into JSON to send back to the client, they'd see JSON that looks like:
{
    "id" : 45,
    "refId" : "067e6162-3b6f-4ae2-a171-2470b63dff00",
    "firstName" : "smeeb",
    ...
}
Nothing like exposing primary keys to the outside world! Instead, I'd like the serialized JSON to omit the id field (as well as others). Another example might be a lookup/reference table like Colors:
# Perhaps has 7 different color records for ROYGBIV
CREATE TABLE colors (
    color_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    color_name VARCHAR(20) NOT NULL,
    color_label VARCHAR(20) NOT NULL,
    color_hexcode VARCHAR(20) NOT NULL,
    # other stuff here
);
If the corresponding Color entity looked like this:
@Entity
class Color {
    @Id
    @Column(name = "color_id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long id

    @Column(name = "color_name")
    String name

    @Column(name = "color_label")
    String label

    @Column(name = "color_hexcode")
    String hexcode

    // ...etc.
}
Then with only one model it would serialize into JSON like so:
{
    "id" : 958,
    "name" : "Red",
    "label" : "RED",
    "hexcode" : "ff0000"
}
But maybe I just want it to come back as a simple string value:
{
    "color" : "RED"
}
So it seems I either need two separate models (and mapper classes to map between them), or I need a way to annotate my entities or configure Spring, Jackson, or maybe even Hibernate to apply certain transformations at the right time. Do these frameworks offer anything that can help here, or am I going to have to go with two distinct domain models?
You can actually accomplish this with just one model, and I think that is the easiest way if you are only looking for hiding fields, custom formatting, simple transformation of attributes, etc. Having two models requires transforming from one model to the other and vice versa, which is a pain. Jackson provides a lot of useful annotations for customizing the output; some that may help you are listed below, followed by a short sketch.
@JsonIgnore - ignores a field/attribute. You can hide your id field with this annotation.
@JsonInclude - specifies when a field should be present in the output, e.g. whether a field should appear if it is null.
@JsonSerialize - specifies a custom serializer for an attribute, e.g. an attribute 'password' that you want to output as '****'.
@JsonFormat - applies a custom format to a field. This is very useful for date/time fields.
@JsonProperty - gives a field a different name in the output, e.g. a field 'name' in your model that you want to display as 'userName'.
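A minimal sketch of the single-model approach (reusing the question's entities; @JsonValue is a further Jackson annotation, and field-level @JsonValue needs Jackson 2.9+):

import javax.persistence.*;
import java.util.UUID;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonValue;

@Entity
class Contact {
    @Id
    @Column(name = "contact_id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @JsonIgnore   // primary key is hidden: {"refId": "...", "firstName": "..."}
    Long id;

    @Column(name = "contact_ref_id")
    UUID refId;

    @Column(name = "contact_first_name")
    String firstName;
}

@Entity
class Color {
    @Id
    @Column(name = "color_id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @JsonIgnore
    Long id;

    @Column(name = "color_label")
    @JsonValue    // the whole Color object serializes as just "RED"
    String label;
}

With @JsonValue on label, a Color field named color on another entity serializes as "color" : "RED", which matches the desired output above.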

PostgreSQL Auto-increment inside a JSON

Is it possible to auto-increment inside PostgreSQL's new JSON type using just SQL (like serial) and not server code?
I can't really imagine why you'd want to, but sure.
CREATE SEQUENCE whywouldyou_jsoncol_seq;
CREATE TABLE whywouldyou (
    jsoncol json not null default json_object(ARRAY['id'], ARRAY[nextval('whywouldyou_jsoncol_seq')::text]),
    dummydata text
);
ALTER SEQUENCE whywouldyou_jsoncol_seq OWNED BY whywouldyou.jsoncol;
insert into whywouldyou(dummydata) values('');
select * from whywouldyou;
jsoncol | dummydata
--------------+-----------
{"id" : "1"} |
(1 row)
Note that with this particular formulation it's the string "1", not the number 1, in the JSON. You might want to form the JSON object another way if you want to avoid that; this is just an example.
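If the numeric form matters, one variation (assuming PostgreSQL 9.4+, where json_build_object is available; the second table name is made up) keeps the id as a JSON number:

CREATE TABLE whywouldyou2 (
    jsoncol json NOT NULL
        DEFAULT json_build_object('id', nextval('whywouldyou_jsoncol_seq')),
    dummydata text
);
-- yields {"id" : 1} with a numeric id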