How to set up a username and password with Slick's source code generator? - mysql

Following the directions on this page: http://slick.typesafe.com/doc/2.0.0/code-generation.html
we see that something like the following segment of code is required to generate models for MySQL tables:
val url = "jdbc:mysql://127.0.0.1/SOME_DB_SCHEMA?characterEncoding=UTF-8&useUnicode=true"
val slickDriver = "scala.slick.driver.MySQLDriver"
val jdbcDriver = "com.mysql.jdbc.Driver"
val outputFolder = "/some/path"
val pkg = "com.pligor.server"
scala.slick.model.codegen.SourceCodeGenerator.main(
  Array(slickDriver, jdbcDriver, url, outputFolder, pkg)
)
These parameters are enough for an H2 database, as in the example at the link.
How can I include a username and password for the MySQL database?

Based on several links found on the internet and on cvogt's answer, this is the minimum you need to do.
Note that this is a general solution for sbt. If you are dealing with the Play Framework you might find it easier to perform this task with the relevant plugin.
First of all you need a new sbt project, because of all the library dependencies that have to be referenced for the Slick source code generator to run.
Create the new sbt project using this tutorial: http://scalatutorials.com/beginner/2013/07/18/getting-started-with-sbt/
Preferably use the "Setup using giter8" method.
If you happen to work with IntelliJ, create the file project/plugins.sbt and add this line to it: addSbtPlugin("com.hanhuy.sbt" % "sbt-idea" % "1.6.0").
Then execute gen-idea in sbt to generate an IntelliJ project.
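For reference, the entire project/plugins.sbt file then consists of just that one line:
// project/plugins.sbt
addSbtPlugin("com.hanhuy.sbt" % "sbt-idea" % "1.6.0")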
With giter8 you get an auto-generated file ProjectNameBuild.scala inside the project folder. Open it and include at least these library dependencies:
libraryDependencies ++= List(
  "mysql" % "mysql-connector-java" % "5.1.27",
  "com.typesafe.slick" %% "slick" % "2.0.0",
  "org.slf4j" % "slf4j-nop" % "1.6.4",
  "org.scala-lang" % "scala-reflect" % scala_version
)
where scala_version is the variable private val scala_version = "2.10.3".
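For orientation, a minimal ProjectNameBuild.scala might look roughly like the sketch below. This assumes the old sbt Build-trait style that the giter8 template generated at the time; the object name, project id and scala_version are placeholders to adjust to your own project.
import sbt._
import Keys._

object ProjectNameBuild extends Build {
  // placeholder values; use whatever giter8 generated for your project
  private val scala_version = "2.10.3"

  lazy val root = Project(
    id = "project-name",
    base = file("."),
    settings = Project.defaultSettings ++ Seq(
      scalaVersion := scala_version,
      libraryDependencies ++= List(
        "mysql" % "mysql-connector-java" % "5.1.27",
        "com.typesafe.slick" %% "slick" % "2.0.0",
        "org.slf4j" % "slf4j-nop" % "1.6.4",
        "org.scala-lang" % "scala-reflect" % scala_version
      )
    )
  )
}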
Now create the custom source code generator, which looks like this:
import scala.slick.model.codegen.SourceCodeGenerator

object CustomSourceCodeGenerator {

  import scala.slick.driver.JdbcProfile
  import scala.reflect.runtime.currentMirror

  def execute(url: String,
              jdbcDriver: String,
              user: String,
              password: String,
              slickDriver: String,
              outputFolder: String,
              pkg: String) = {
    // Reflectively load the Slick driver object (e.g. scala.slick.driver.MySQLDriver)
    // from its fully qualified name
    val driver: JdbcProfile = currentMirror.reflectModule(
      currentMirror.staticModule(slickDriver)
    ).instance.asInstanceOf[JdbcProfile]

    // Open a session with an explicit user and password and generate the table code
    // from the database model
    driver.simple.Database.forURL(
      url,
      driver = jdbcDriver,
      user = user,
      password = password
    ).withSession { implicit session =>
      new SourceCodeGenerator(driver.createModel).writeToFile(slickDriver, outputFolder, pkg)
    }
  }
}
Finally you need to call this execute method from the main project object. Find the file ProjectName.scala that was auto-generated by giter8.
Inside it you will find a println call, since this is merely a "hello world" application. Above the println, add a call like this:
CustomSourceCodeGenerator.execute(
  url = "jdbc:mysql://127.0.0.1/SOME_DB_SCHEMA?characterEncoding=UTF-8&useUnicode=true",
  slickDriver = "scala.slick.driver.MySQLDriver",
  jdbcDriver = "com.mysql.jdbc.Driver",
  outputFolder = "/some/path",
  pkg = "com.pligor.server",
  user = "root",
  password = "xxxxxyourpasswordxxxxx"
)
This way, every time you execute sbt run, the Table classes required by Slick are generated automatically.

Note that as of 2.0.1 this is fixed. Just add the username and password to the end of the Array as Strings.
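For example, reusing the placeholder values from the question (a sketch; the last two Strings are the user and the password):
scala.slick.model.codegen.SourceCodeGenerator.main(
  Array(
    "scala.slick.driver.MySQLDriver",
    "com.mysql.jdbc.Driver",
    "jdbc:mysql://127.0.0.1/SOME_DB_SCHEMA?characterEncoding=UTF-8&useUnicode=true",
    "/some/path",
    "com.pligor.server",
    "root",                  // user
    "xxxxxyourpasswordxxxxx" // password
  )
)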

This has been asked and answered here: https://groups.google.com/forum/#!msg/scalaquery/UcS4_wyrJq0/obLHheIWIXEJ . Currently you need to customize the code generator. A PR for 2.0.1 is in the queue.

My solution is nearly the same as George's answer, but I'll add mine anyway. This is the entire file I use to generate code for my MySQL database in an sbt project.
SlickAutoGen.scala
package mypackage

import scala.slick.model.codegen.SourceCodeGenerator

object CodeGen {
  def main(args: Array[String]) {
    SourceCodeGenerator.main(
      Array(
        "scala.slick.driver.MySQLDriver",
        "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost:3306/mydb",
        "src/main/scala/",
        "mypackage",
        "root",
        "" // I don't use a password on localhost
      )
    )
  }
}
build.sbt
// build.sbt --- Scala build tool settings
libraryDependencies ++= List(
  "com.typesafe.slick" %% "slick" % "2.0.1",
  "mysql" % "mysql-connector-java" % "5.1.24",
  ...
)
To use this, just modify the settings, save the file in the project root directory, and run it as follows:
$ sbt
> runMain mypackage.CodeGen

Related

How to import variables from a JSON file to attributes in BUILD.bazel?

I would like to import variables defined in a JSON file (my_info.json) as attributes for Bazel rules.
I tried this (https://docs.bazel.build/versions/5.3.1/skylark/tutorial-sharing-variables.html) and it works, but I do not want to use a .bzl file; I want to import the variables directly as attributes in BUILD.bazel.
I want to use those variables imported from my_info.json as attributes in other BUILD.bazel files.
projects/python_web/BUILD.bazel
load("//projects/tools/parser:config.bzl", "MY_REPO","MY_IMAGE")
container_push(
name = "publish",
format = "Docker",
registry = "registry.hub.docker.com",
repository = MY_REPO,
image = MY_IMAGE,
tag = "1",
)
Asking something similar in the Bazel Slack, I was informed that it is not possible to import variables directly into Bazel, and that it is necessary to parse the JSON variables and write them into a .bzl file.
I also tried the code below, but nothing is written to the config.bzl file.
my_info.json
{
    "MYREPO" : "registry.hub.docker.com",
    "MYIMAGE" : "michael/monorepo-python-web"
}
WORKSPACE.bazel
load("//projects/tools/parser:jsonparser.bzl", "load_my_json")
load_my_json(
name = "myjson"
)
projects/tools/parser/jsonparser.bzl
def _load_my_json_impl(repository_ctx):
    json_data = json.decode(repository_ctx.read(repository_ctx.path(Label(":my_info.json"))))
    config_lines = ["%s = %s" % (key, repr(val)) for key, val in json_data.items()]
    repository_ctx.file("config.bzl", "\n".join(config_lines))

load_my_json = repository_rule(
    implementation = _load_my_json_impl,
    attrs = {},
)
projects/tools/parser/BUILD.bazel
load("#aspect_bazel_lib//lib:yq.bzl", "yq")
load(":config.bzl", "MYREPO", "MY_IMAGE")
yq(
name = "convert",
srcs = ["my_info2.json"],
args = ["-P"],
outs = ["bar.yaml"],
)
Executing:
% bazel build projects/tools/parser:convert
ERROR: Traceback (most recent call last):
File "/Users/michael.taquia/Documents/Personal/Projects/bazel/bazel-projects/multi-language-bazel-monorepo/projects/tools/parser/BUILD.bazel", line 2, column 22, in <toplevel>
load(":config.bzl", "MYREPO", "MY_IMAGE")
Error: file ':config.bzl' does not contain symbol 'MYREPO'
While troubleshooting I see that the execution reaches jsonparser.bzl but never enters the _load_my_json_impl function (based on print statements), and nothing is written to config.bzl.
Notes: Tested on macOS 12.6 (21G115), Darwin Kernel Version 21.6.0.
Is there a better way to do this? A code snippet would be very useful.

Can't read JSON file in Ruby on Rails

I am new to Ruby on Rails and I want to read data from a JSON file in a specified directory, but I constantly get an error in chap3 (the file name):
Errno::ENOENT in TopController#chap3. No such file or directory @ rb_sysopen - links.json.
In the console, I get the message
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
How can I fix that?
Code:
require "json"
class TopController < ApplicationController
def index
#message = "おはようございます!"
end
def chap3
data = File.read('links.json')
datahash = JSON.parse(data)
puts datahash.keys
end
def getName
render plain: "名前は、#{params[:name]}"
end
def database
#members = Member.all
end
end
JSON file:
{ "data": [
{"link1": "http://localhost:3000/chap3/a.html"},
{"link2": "http://localhost:3000/chap3/b.html"},
{"link3": "http://localhost:3000/chap3/c.html"},
{"link4": "http://localhost:3000/chap3/d.html"},
{"link5": "http://localhost:3000/chap3/e.html"},
{"link6": "http://localhost:3000/chap3/f.html"},
{"link7": "http://localhost:3000/chap3/g.html"}]}
I would change these two lines
data = File.read('links.json')
datahash = JSON.parse(data)
in the controller to
datahash = JSON.parse(Rails.root.join('app/controllers/links.json').read)
Note: I would consider moving this kind of configuration file into the /config folder and creating a simple Ruby class to handle it. Additionally, you might want to consider paths instead of URLs with a host because localhost:3000 might work in the development environment but in production, you will need to return non-localhost URLs anyway.
To have Rails use the content of the file in the controller:
@data = File.read("#{Rails.root}/app/controllers/links.json")

Play framework: Unable to Inject Database object

I am trying to connect to MySQL using the Play Framework. I am new to Play and unable to figure out the exact problem. Any help will be highly appreciated.
The configuration in conf\application.conf is as follows:
config = "db"
default = "default"
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost/ng_play"
db.default.username=root
db.default.password="****"
ebean.default = ["models.*"]
build.sbt
name := """play-scala-tutorial-one"""
version := "1.0-SNAPSHOT"
lazy val root = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
jdbc,
cache,
ws,
"mysql" % "mysql-connector-java" % "5.1.36",
"org.scalatestplus.play" %% "scalatestplus-play" % "1.5.1" % Test
)
resolvers += "scalaz-bintray" at "http://dl.bintray.com/scalaz/releases"
The MySQL version and the database connector version were mismatched. Also, adding db.default.hikaricp.connectionTestQuery="SELECT TRUE" to application.conf helped mitigate one issue.
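In other words, the database section of application.conf ends up looking something like this (a sketch reusing the values from the question, with the connection-test line added):
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost/ng_play"
db.default.username=root
db.default.password="****"
db.default.hikaricp.connectionTestQuery="SELECT TRUE"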
Thanks @silentprogrammer and @salem for the help.

Erlang - undefined function

I'm trying to execute some very simple Erlang code, and it's not working.
I've tried executing some hello worlds without a problem, but not my own code.
-module(server).

%% Exported Functions
-export([start/0, process_requests/1]).

%% API Functions
start() ->
    ServerPid = spawn(server, process_requests, [[]]),
    register(myserver, ServerPid).

process_requests(Clients) ->
    receive
        {client_join_req, Name, From} ->
            NewClients = [From|Clients], %% TODO: COMPLETE
            broadcast(NewClients, {join, Name}),
            process_requests(NewClients); %% TODO: COMPLETE
        {client_leave_req, Name, From} ->
            NewClients = lists:delete(From, Clients), %% TODO: COMPLETE
            broadcast(Clients, {leave, Name}), %% TODO: COMPLETE
            process_requests(NewClients); %% TODO: COMPLETE
        {send, Name, Text} ->
            broadcast(Clients, {message, Name, Text}), %% TODO: COMPLETE
            process_requests(Clients)
    end.

%% Local Functions
broadcast(PeerList, Message) ->
    Fun = fun(Peer) -> Peer ! Message end,
    lists:map(Fun, PeerList).
Compile result:
5> c(server).
{ok,server}
6> server:start().
** exception error: undefined function server:start/0
You compile your code with c/1, but you forgot to load it into the VM with l/1. While the VM does load new modules automatically (modules not yet loaded into the VM), it doesn't reload them each time you compile a new beam.
If you do it a lot in development you might want to look into tools like sync.
Check with pwd() whether you are in the directory where your listed server code is; this seems to be a path issue. It can also happen that somewhere on your code:get_path() there is a directory with another server.beam that does not have a start function.

Is there a tool to check database integrity in Django?

The MySQL database powering our Django site has developed some integrity problems; e.g. foreign keys that refer to nonexistent rows. I won't go into how we got into this mess, but I'm now looking at how to fix it.
Basically, I'm looking for a script that scans all models in the Django site, and checks whether all foreign keys and other constraints are correct. Hopefully, the number of problems will be small enough so they can be fixed by hand.
I could code this up myself but I'm hoping that somebody here has a better idea.
I found django-check-constraints but it doesn't quite fit the bill: right now, I don't need something to prevent these problems, but to find them so they can be fixed manually before taking other steps.
Other constraints:
Django 1.1.1 and upgrading has been determined to break things
MySQL 5.0.51 (Debian Lenny), currently with MyISAM tables
Python 2.5, might be upgradable but I'd rather not right now
(Later, we will convert to InnoDB for proper transaction support, and maybe foreign key constraints on the database level, to prevent similar problems in the future. But that's not the topic of this question.)
I whipped up something myself. The management script below should be saved in myapp/management/commands/checkdb.py. Make sure that intermediate directories have an __init__.py file.
Usage: ./manage.py checkdb for a full check; use --exclude app.Model or -e app.Model to exclude the model Model in the app app.
from django.core.management.base import BaseCommand, CommandError
from django.core.management.base import NoArgsCommand
from django.core.exceptions import ObjectDoesNotExist
from django.db import models
from optparse import make_option

from lib.progress import with_progress_meter

def model_name(model):
    return '%s.%s' % (model._meta.app_label, model._meta.object_name)

class Command(BaseCommand):
    args = '[-e|--exclude app_name.ModelName]'
    help = 'Checks constraints in the database and reports violations on stdout'

    option_list = NoArgsCommand.option_list + (
        make_option('-e', '--exclude', action='append', type='string', dest='exclude'),
    )

    def handle(self, *args, **options):
        # TODO once we're on Django 1.2, write to self.stdout and self.stderr instead of plain print
        exclude = options.get('exclude', None) or []

        failed_instance_count = 0
        failed_model_count = 0
        for app in models.get_apps():
            for model in models.get_models(app):
                if model_name(model) in exclude:
                    print 'Skipping model %s' % model_name(model)
                    continue
                fail_count = self.check_model(app, model)
                if fail_count > 0:
                    failed_model_count += 1
                    failed_instance_count += fail_count
        print 'Detected %d errors in %d models' % (failed_instance_count, failed_model_count)

    def check_model(self, app, model):
        meta = model._meta
        if meta.proxy:
            print 'WARNING: proxy models not currently supported; ignored'
            return

        # Define all the checks we can do; they return True if they are ok,
        # False if not (and print a message to stdout)
        def check_foreign_key(model, field):
            foreign_model = field.related.parent_model
            def check_instance(instance):
                try:
                    # name: name of the attribute containing the model instance (e.g. 'user')
                    # attname: name of the attribute containing the id (e.g. 'user_id')
                    getattr(instance, field.name)
                    return True
                except ObjectDoesNotExist:
                    print '%s with pk %s refers via field %s to nonexistent %s with pk %s' % \
                        (model_name(model), str(instance.pk), field.name, model_name(foreign_model), getattr(instance, field.attname))
            return check_instance

        # Make a list of checks to run on each model instance
        checks = []
        for field in meta.local_fields + meta.local_many_to_many + meta.virtual_fields:
            if isinstance(field, models.ForeignKey):
                checks.append(check_foreign_key(model, field))

        # Run all checks
        fail_count = 0
        if checks:
            for instance in with_progress_meter(model.objects.all(), model.objects.count(), 'Checking model %s ...' % model_name(model)):
                for check in checks:
                    if not check(instance):
                        fail_count += 1
        return fail_count
I'm making this a community wiki because I welcome any and all improvements to my code!
Thomas' answer is great but is now a bit out of date.
I have updated it as a gist to support Django 1.8+.