Rails - MySQL app - Reporting and Charting requirements

I have a Rails app with a MySQL database. It houses a lot of data on which we need to run reports and build dashboards, which requires some calculations. What are some ready-to-use solutions for this? Gems, tools, or frameworks that can help get it developed fast would be appreciated.

For charts you can use the FusionCharts gem in Rails:
https://github.com/fusioncharts/rails-wrapper
To draw charts you need the data in a hash. Create one lib class that returns the hash data, for example:
class Dashboard
  def initialize
    # initialize here
  end

  # Returns one hash entry per chart
  def data_set
    {
      user_details: User.details,
      profile_details: Profile.details
    }
  end
end
In each model you can fetch the data using queries. To display the charts, render them as described in the gem's documentation.

Related

How to read a table from the database at Pyramid application startup?

I have a Pyramid game-server app that uses SQLAlchemy to read from and write to a Postgres database. I want to read a certain table (call it games) from the database at the time the app is created. This games data will be used by one of my WSGI middlewares, which is hooked into the app, to send statsd metrics. To do this, I added a subscriber in the main function of my app like:
config.add_subscriber(init_mw_data, ApplicationCreated)
Now, I want to read the games table in the following function:
def init_mw_data(event):
    ...
    ...
Does anybody know how I can read the games table inside the function init_mw_data?
It depends on how you configured your application.
The default SQLAlchemy template from pyramid-cookiecutter-starter registers a dbsession_factory.
So, you can do something like this:
def init_mw_data(event):
    # The ApplicationCreated event carries the app, and thus the registry
    registry = event.app.registry
    # Build a session from the registered factory
    dbsession = registry['dbsession_factory']()
    dbsession.query(...)
    ...
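For the games table specifically, a minimal sketch, assuming a SQLAlchemy model named Game mapped to that table (the model import and the 'games' registry key are illustrative, not from the question), could cache the rows on the registry where the middleware can later find them:
from myapp.models import Game  # hypothetical model for the games table

def init_mw_data(event):
    registry = event.app.registry
    dbsession = registry['dbsession_factory']()
    # Read the whole games table once at startup...
    games = dbsession.query(Game).all()
    # ...and stash it on the registry for the statsd middleware to use
    registry['games'] = games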

Connecting Database with Svelte

I'm new to Svelte and would like to create an ordering website with it. I know that I will need a database to keep track of the orders, customer names, prices, etc. I have used MySQL before, but I haven't learned how to connect a database to a website.
Is there a specific database that you should use with Svelte?
Or is there a way to connect MySQL to Svelte?
I have searched for this on YouTube and Google, but I'm not sure whether it's different when you are using Svelte, so I wanted to make sure.
Note: I have not started this project yet, so I do not have any code to show; I just want to know how you can connect a database if you're using Svelte.
Svelte is a front-end JavaScript framework that runs in the browser.
Traditionally, to use a database like MySQL from a front-end project such as Svelte (which contains only HTML, CSS, and JS), you need a separate backend project. The Svelte app and the backend then communicate through a REST API. The same applies to other front-end libraries/frameworks like React, Angular, Vue, etc.
There are still many ways to achieve this. Since you are focusing on Svelte, here are a few options:
1. Sapper
Sapper is an application framework powered by Svelte. You can also write backend code using Express or Polka, so you can connect to the database of your choice (MySQL / MongoDB).
2. Use a serverless database
If you want to keep your app simple and just focus on the Svelte side, you can use a cloud-based database such as Firebase. Svelte can talk to it directly via its JavaScript SDK.
3. Monolithic architecture
To connect to MySQL in the backend, you need a server-side programming language such as Node.js (Express), PHP, Python, or whatever you are familiar with. You can then embed the Svelte app, or use an API to pass data to it; a minimal Python sketch of this follows below.
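As a rough illustration of option 3's backend half, here is a minimal sketch assuming Flask and PyMySQL are installed and that a local MySQL database named shop with an orders table exists (all of these names are illustrative, not from the question):
# app.py - a hypothetical minimal REST backend for the Svelte app
import pymysql
from flask import Flask, jsonify

app = Flask(__name__)

def get_connection():
    # Assumed local MySQL credentials/schema; adjust to your setup
    return pymysql.connect(host="localhost", user="root",
                           password="secret", database="shop",
                           cursorclass=pymysql.cursors.DictCursor)

@app.route("/api/orders")
def orders():
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, customer_name, price FROM orders")
            # DictCursor rows serialize straight to JSON objects
            return jsonify(cur.fetchall())
    finally:
        conn.close()

if __name__ == "__main__":
    app.run(port=5000)
On the Svelte side you would then fetch('/api/orders') (for example in onMount) and render the returned rows.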
I can give an example with MongoDB.
You have to install the library:
npm install mongodb
or add it to package.json.
Then make a connection file that you call every time you need to use the DB:
const mongo = require("mongodb");

let client = null;
let db = null;

export async function init() {
    if (!client) {
        client = await mongo.MongoClient.connect("mongodb://localhost");
        db = client.db("name-of-your-db");
    }
    return { client, db };
}
For a complete example with inserts, see this video:
https://www.youtube.com/watch?v=Mey2KZDog_A
You can use PouchDB, which gives you direct access to IndexedDB in the browser. No backend is needed for this.
The client-side PouchDB can then be replicated/synced with a remote CouchDB. This can all be done from the client side, inside your Svelte app.
It is pretty easy to set up:
var db = new PouchDB('dbname');

db.put({
    _id: 'dave#gmail.com',
    name: 'David',
    age: 69
});

db.changes().on('change', function() {
    console.log('Ch-Ch-Changes');
});

db.replicate.to('http://example.com/mydb');
More on pouchdb.com.
The client can also save data offline first and connect to a remote database later.
As I read it, the question is mostly about connecting to a backend, not to a database. It is a pity, but the Svelte app template has no way to connect to a backend out of the box.
As for me, I use an Express middleware in front of the Rollup dev server. This way you can proxy some requests to a backend server. Check the code below:
const proxy = require('express-http-proxy');
const app = require('express')();

app.use('/data/', proxy(
    'http://backend/data',
    {
        proxyReqPathResolver: req => {
            return '/data' + req.url;
        }
    }
));

app.use('/', proxy('http://127.0.0.1:5000'));
app.listen(5001);
This script opens port 5001, where URLs under /data/ are proxied to the backend server, while port 5000 is still served by the Rollup dev server. So at http://localhost:5001/ you have the Svelte instance connected to the backend via the /data/ URL, through which you can send requests that fetch data from the database.

How do I create a lot of sample data for Firestore?

Let's say I need to create a lot of different documents/collections in Firestore. I need to add them quickly, like copy-pasting JSON. I can't do that with the standard Firebase console, because adding 100 documents would take me forever. Are there any solutions to bulk-create mock data with a given structure in a Firestore DB?
If you switch to the Cloud Console (rather than the Firebase Console) for your project, you can use Cloud Shell as a starting point.
In the Cloud Shell environment you'll find tools like node and python installed and available. Using whichever one you prefer, you can write a script using the server client libraries.
For example, in Python:
from google.cloud import firestore
import random

MAX_DOCUMENTS = 100
SAMPLE_COLLECTION_ID = u'users'
SAMPLE_COLORS = [u'Blue', u'Red', u'Green', u'Yellow', u'White', u'Black']

# Project ID is determined by the GCLOUD_PROJECT environment variable
db = firestore.Client()
collection_ref = db.collection(SAMPLE_COLLECTION_ID)

# range(MAX_DOCUMENTS) yields exactly 100 iterations
# (range(0, MAX_DOCUMENTS - 1) would stop one short)
for x in range(MAX_DOCUMENTS):
    collection_ref.add({
        u'primary': random.choice(SAMPLE_COLORS),
        u'secondary': random.choice(SAMPLE_COLORS),
        u'trim': random.choice(SAMPLE_COLORS),
        u'accent': random.choice(SAMPLE_COLORS)
    })
While this is the easiest way to get up and running with a static dataset, it leaves a little to be desired. Namely, with Firestore, live dynamic data is needed to exercise its functionality, such as real-time queries. For this task, using Cloud Scheduler and Cloud Functions is a relatively easy way to update the sample data regularly.
In addition to the sample-generation code, you specify the update frequency in Cloud Scheduler. For instance, */10 * * * * defines a frequency of every 10 minutes in the standard unix-cron format.
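As a sketch of what the scheduled function itself might look like, assuming a Pub/Sub-triggered Python Cloud Function wired to the Scheduler job (the function name and collection reuse the example above; the trigger wiring is up to you):
import random
from google.cloud import firestore

SAMPLE_COLORS = [u'Blue', u'Red', u'Green', u'Yellow', u'White', u'Black']

def update_sample_data(event, context):
    # Entry point of the Cloud Function; Cloud Scheduler publishes to the
    # Pub/Sub topic this function subscribes to, once per cron tick.
    db = firestore.Client()
    db.collection(u'users').add({
        u'primary': random.choice(SAMPLE_COLORS),
        u'secondary': random.choice(SAMPLE_COLORS)
    })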
For non-static data, often a timestamp is useful. Firestore provides a way to have a timestamp from the database server added at write-time as one of the fields:
u'timestamp': firestore.SERVER_TIMESTAMP
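In context, that field simply rides along in the document being written, reusing the names from the snippet above:
collection_ref.add({
    u'primary': random.choice(SAMPLE_COLORS),
    # Filled in server-side at write time
    u'timestamp': firestore.SERVER_TIMESTAMP
})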
It is worth noting that timestamps like this will hotspot in production systems if not sharded correctly. Typically 500 writes/second to the same collection is the maximum you will want, so that the index doesn't hotspot. Sharding can be as simple as each user having their own collection (500 writes/second per user). However, for this example, writing 100 documents every minute via a scheduled Cloud Function is definitely not an issue.
FireKit is a good resource to use for this purpose. It even allows sub-collections.
https://retroportalstudio.gumroad.com/l/firekit_free

Django - external database, read-only, displayed in tables

I am trying to start a simple Django app. I have been on it for days; I was able to do this in Flask in a few hours.
I need advice on connecting to an external database to grab tables and display them on Django pages.
This is my code in Flask:
import sqlite3 as sql
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/topgroups")
def topgroups():
    con = sql.connect("C:\\Users\\win10\\YandexDisk\\apps\\flask\\new_file.sqlite")
    con.row_factory = sql.Row
    cur = con.cursor()
    cur.execute("SELECT domain, whois, Traffic, Groups, LE, adddate FROM do_1 WHERE Groups IN (75,86,66,58,67,57,68,85,48,56,76,77,46,65,47,64,45,55,74,54,44,33,34,43)")
    rows = cur.fetchall()
    return render_template("index.html", rows=rows)
I will give you the plain-Python answer, but read until the end, because you may be losing a lot of Django if you follow this approach.
Python comes with SQLite capabilities, so you don't even need to install packages (see the Python docs):
Connect
import sqlite3
conn = sqlite3.connect('C:\\Users\\win10\\YandexDisk\\apps\\flask\\new_file.sqlite')
Want to ensure read-only access? From the docs:
conn = sqlite3.connect('file:C:\\Users\\win10\\YandexDisk\\apps\\flask\\new_file.sqlite?mode=ro', uri=True)
Use
cur = conn.cursor()
... (just like in Flask)
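Put together inside a Django view, a minimal sketch (assuming the same file path from the question and an index.html template in your app; the view name is illustrative):
import sqlite3

from django.shortcuts import render

def topgroups(request):
    # Open the external SQLite file read-only
    conn = sqlite3.connect(
        'file:C:\\Users\\win10\\YandexDisk\\apps\\flask\\new_file.sqlite?mode=ro',
        uri=True)
    conn.row_factory = sqlite3.Row
    cur = conn.cursor()
    cur.execute("SELECT domain, whois, Traffic, Groups, LE, adddate FROM do_1")
    rows = cur.fetchall()
    conn.close()
    return render(request, "index.html", {"rows": rows})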
Note/ My recommendation
One of the biggest advantages of Django is:
Define your data models entirely in Python. You get a rich, dynamic database-access API for free — but you can still write SQL if needed.
And you'll lose a lot without it, from basic stuff like what you asked to even unit-testing capabilities.
Follow this tutorial to integrate your database: Integrating Django with a legacy database.
You may set managed = False in the model's Meta and Django won't touch those tables; it will only create new ones to support the app.
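For instance, a model for the do_1 table from the question might come out of inspectdb looking roughly like this (a sketch; the generated field names and types will depend on the actual schema):
from django.db import models

class Do1(models.Model):
    domain = models.TextField(blank=True, null=True)
    whois = models.TextField(blank=True, null=True)
    traffic = models.IntegerField(db_column='Traffic', blank=True, null=True)
    groups = models.IntegerField(db_column='Groups', blank=True, null=True)

    class Meta:
        managed = False   # Django will not create or migrate this table
        db_table = 'do_1'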
If you use that DB only for some special purpose, then take a look at Django's multiple databases support.
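A sketch of what that looks like in settings, registering the external file under its own alias (the 'legacy' alias is illustrative):
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    },
    # The external, read-only database from the question
    'legacy': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': r'C:\Users\win10\YandexDisk\apps\flask\new_file.sqlite',
    },
}
Queries can then be routed explicitly, e.g. Do1.objects.using('legacy').all().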

How to use the Google api-client Python library for Google Logging

I've been using the Google apiclient library in python for various Google Cloud APIs - mostly for Google Compute - with great success.
I want to start using the library to create and control the Google Logging mechanism offered by the Google Cloud Platform.
However, this is a beta version, and I can't find any real documentation or example on how to use the logging API.
All I was able to find are high-level descriptions such as:
https://developers.google.com/apis-explorer/#p/logging/v1beta3/
Can anyone provide a simple example on how to use apiclient for logging purposes?
for example creating a new log entry...
Thanks for the help
Shahar
I found this page:
https://developers.google.com/api-client-library/python/guide/logging
Which states you can do the following to set the log level:
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
However it doesn't seem to have any impact on the output which is always INFO for me.
I also tried setting httplib2 to debuglevel 4:
import httplib2
httplib2.debuglevel = 4
Yet I don't see any HTTP headers in the log :/
I know this question is old, but it is getting some attention, so I guess it might be worth answering it, in case someone else comes here.
Stackdriver Logging Client Libraries for Google Cloud Platform are not in beta anymore, as they hit General Availability some time ago. The link I shared contains the most relevant documentation for installing and using them.
After running the command pip install --upgrade google-cloud-logging, you will be able to authenticate with your GCP account, and use the Client Libraries.
Using them is as easy as importing the library with a command such as from google.cloud import logging, then instantiating a new client (which you can use with the defaults, or pass the Project ID and Credentials explicitly), and finally working with logs as you want.
You may also want to visit the official library documentation, where you will find all the details of how to use the library, which methods and classes are available, and how to do most of the things, with lots of self-explanatory examples, and even comparisons between the different alternatives on how to interact with Stackdriver Logging.
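Since the original question specifically asks about creating a new log entry, here is a minimal sketch (the log name and payloads are illustrative):
from google.cloud import logging

client = logging.Client()
logger = client.logger('my-sample-log')   # hypothetical log name

# Write a plain text entry
logger.log_text('Hello, Stackdriver Logging!')

# Structured (JSON) entries are supported too
logger.log_struct({'event': 'user_signup', 'user_id': 42})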
As a small example, let me also share a snippet that retrieves the five most recent logs with a severity higher than "warning":
# Import the Google Cloud Python client library
from google.cloud import logging
from google.cloud.logging import DESCENDING

# Instantiate a client
logging_client = logging.Client(project=<PROJECT_ID>)

# The filter to apply to the logs; this one retrieves GAE logs from the
# default service with a severity higher than "warning"
FILTER = 'resource.type:gae_app and resource.labels.module_id:default and severity>=WARNING'

i = 0
# List the entries in DESCENDING order, applying the FILTER
for entry in logging_client.list_entries(order_by=DESCENDING, filter_=FILTER):  # API call
    print('{} - Severity: {}'.format(entry.timestamp, entry.severity))
    i += 1
    if i >= 5:   # stop after exactly five entries
        break
Bear in mind that this is just a simple example, and that many things can be achieved using the Logging Client Library, so you should refer to the official documentation pages that I shared in order to get a more deep understanding of how everything works.
However it doesn't seem to have any impact on the output which is always INFO for me.
Add a logging handler, e.g.:
import logging

logger = logging.getLogger()  # the root logger, as in the snippet above

# Attach a console handler so records are actually emitted somewhere
formatter = logging.Formatter('%(asctime)s %(process)d %(levelname)s: %(message)s')
consoleHandler = logging.StreamHandler()
consoleHandler.setLevel(logging.DEBUG)
consoleHandler.setFormatter(formatter)
logger.addHandler(consoleHandler)