How to generate MySQL table from Go struct

I'm creating a CRUD program using Go and I have quite a big struct with over 70 fields that I want to add to a MySQL database.
I was wondering if there's a way to automatically map the struct into my database so I wouldn't have to create the table manually and it would just copy my struct?

I haven't found a way to totally automate that process, but at least you can create the tables using tags and only a little bit of code.
Workaround example:
There are some GitHub projects in the wild that help you achieve this.
For example, structable.
You'd have to add tags to your struct's members.
Example from GitHub:
type Stool struct {
    Id       int    `stbl:"id, PRIMARY_KEY, AUTO_INCREMENT"`
    Legs     int    `stbl:"number_of_legs"`
    Material string `stbl:"material"`
    Ignored  string // will not be stored. No tag.
}
When you have that part, you can bind the struct to a table and insert records like in the following example (also from the GitHub page):
stool := new(Stool)
stool.Material = "Wood"
db := getDb() // Get a sql.Db. You're on the hook to do this part.
// Create a new structable.Recorder and tell it to
// bind the given struct as a row in the given table.
r := structable.New(db, "mysql").Bind("test_table", stool)
// This will insert the stool into the test_table.
err := r.Insert()
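If you also want the CREATE TABLE statement generated from the struct itself, a small reflection helper gets you most of the way there. The following is only a sketch based on assumptions: it reuses the stbl tag from the structable example above (the struct is repeated so the snippet is self-contained), and the Go-to-MySQL type mapping in sqlType is something you would have to extend for your 70 fields:
package main

import (
    "fmt"
    "reflect"
    "strings"
)

type Stool struct {
    Id       int    `stbl:"id, PRIMARY_KEY, AUTO_INCREMENT"`
    Legs     int    `stbl:"number_of_legs"`
    Material string `stbl:"material"`
    Ignored  string // no tag, so no column
}

// sqlType maps a few Go kinds to MySQL column types (extend as needed).
func sqlType(k reflect.Kind) string {
    switch k {
    case reflect.Int, reflect.Int32, reflect.Int64:
        return "BIGINT"
    case reflect.Float32, reflect.Float64:
        return "DOUBLE"
    case reflect.Bool:
        return "TINYINT(1)"
    default:
        return "VARCHAR(255)"
    }
}

// createTableSQL builds a CREATE TABLE statement from the stbl tags.
func createTableSQL(table string, v interface{}) string {
    t := reflect.TypeOf(v)
    if t.Kind() == reflect.Ptr {
        t = t.Elem()
    }
    var cols []string
    for i := 0; i < t.NumField(); i++ {
        f := t.Field(i)
        tag := f.Tag.Get("stbl")
        if tag == "" {
            continue // untagged fields are not stored
        }
        parts := strings.Split(tag, ",")
        col := strings.TrimSpace(parts[0]) + " " + sqlType(f.Type.Kind())
        for _, opt := range parts[1:] {
            switch strings.TrimSpace(opt) {
            case "PRIMARY_KEY":
                col += " PRIMARY KEY"
            case "AUTO_INCREMENT":
                col += " AUTO_INCREMENT"
            }
        }
        cols = append(cols, col)
    }
    return fmt.Sprintf("CREATE TABLE IF NOT EXISTS %s (\n  %s\n);", table, strings.Join(cols, ",\n  "))
}

func main() {
    fmt.Println(createTableSQL("test_table", &Stool{}))
}
Running this prints a CREATE TABLE IF NOT EXISTS statement for test_table, which you could execute once with db.Exec before binding structable to the table.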

try gen-model
go get -u github.com/DaoYoung/gen-model
gen-model init # then set value in .gen-model.yaml
gen-model create
done

Related

PLSQL- Walking thru JSON structure without knowing element names

Used Database:
I'm using an Oracle 19c database, so I tried to use the JSON functions already available in PL/SQL (for instance JSON_TABLE) to import JSON into a database table.
What I'm doing:
I'm calling an API, getting JSON from it, and then I would like to import the data into the database, regardless of what data arrives and in what structure.
Problem:
I would like to iterate over JSON data without knowing the element names inside that JSON.
I would like to know where I actually am (the name of the current node) and the names of the child elements, so that I could dynamically create tables from those names, add relations between them, and import all the data.
What I have tried:
So far I have been doing it manually: I had to create the tables myself, and importing the data required knowledge of the object names and of the JSON structure I wanted to import. It works, but I would like to build something more universal. All of this had to be done because I don't know any way to walk through the structure of a JSON document without knowing the object names and, in general, the entire JSON structure.
Any ideas how to walk through a JSON structure without knowing the object names and the relations between them?
Learn the new PL/SQL JSON data structures (see the "JSON Data Structures" chapter in the Oracle documentation). For example:
procedure parse_json(p_json in blob) is
  l_elem json_element_t := json_element_t.parse(p_json);
  l_obj  json_object_t;
  l_arr  json_array_t;
  l_keys json_key_list;
begin
  case
    when l_elem.is_Object then
      l_obj  := treat(l_elem as json_object_t);
      l_keys := l_obj.get_Keys;
      for i in 1..l_keys.count loop
        -- work with the keys
        if l_obj.get(l_keys(i)).is_object then
          null; -- this key holds an object
        end if;
        if l_obj.get(l_keys(i)).is_array then
          null; -- this key holds an array
        end if;
      end loop;
    when l_elem.is_Array then
      l_arr := treat(l_elem as json_array_t);
      for i in 0..l_arr.get_size - 1 loop
        -- work with the array
        case l_arr.get_type(i)
          when 'SCALAR' then
            if l_arr.get(i).is_string then
              null; -- ...
            elsif l_arr.get(i).is_number then
              null; -- ...
            elsif l_arr.get(i).is_timestamp then
              null; -- ...
            elsif l_arr.get(i).is_boolean then
              null; -- ...
            end if;
          when 'OBJECT' then
            null; -- ...
          when 'ARRAY' then
            null; -- ...
        end case;
      end loop;
  end case;
end parse_json;
You can also use the truly helpful JSON Data Guide and the DBMS_JSON package to map out the JSON object for you, and even automatically create a view (built on JSON_TABLE) from it.
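For instance, a minimal sketch, assuming the API responses have already been landed in a staging table MY_JSON_DOCS with a JSON column DOC (the table, column and view names here are made up):
declare
  l_guide clob;
begin
  -- let Oracle describe the structure of the stored documents
  select json_dataguide(doc, dbms_json.format_hierarchical)
    into l_guide
    from my_json_docs;

  -- generate a relational view (built on JSON_TABLE) from that data guide
  dbms_json.create_view('MY_JSON_DOCS_V', 'MY_JSON_DOCS', 'DOC', l_guide);
end;
/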
Regards

How to fetch data from API in Oracle APEX without web source module

I'm new to APEX, PL/SQL and APIs/JSON, so please bear with me.
I need to create a search page where the data will be coming from an API.
I tried to do it with a Web Source module, but unfortunately I'm getting an error. I already checked with the DBA team, etc., and the error is still there; I suspect it's a version issue or something, so I dropped that idea, even though it would have helped me a lot.
So the workaround is that PL/SQL will connect to the API.
So it goes like this:
In APEX, I will input some data in a text box, and when I click the search button it will fetch the data from the API into an interactive report.
UPDATED:
This is what I have, and I believe there's also a JSON conversion step that I need to do.
declare
  v_url         varchar2(1000);
  v_wallet_path varchar2(120) := '<walletvalue>';
  v_body        clob := '{<json body>}';
  l_response    clob;
begin
  apex_web_service.g_request_headers.delete;
  apex_web_service.g_request_headers(1).name  := 'Ocp-Apim-Subscription-Key';
  apex_web_service.g_request_headers(1).value := '<key value>';

  v_url := '<url>';

  l_response := apex_web_service.make_rest_request(
                  p_url         => v_url,
                  p_http_method => 'POST',
                  p_wallet_path => v_wallet_path,
                  p_wallet_pwd  => '<password>',
                  p_body        => v_body);

  if apex_web_service.g_status_code = 200 then -- OK
    null; -- dbms_output.put_line(l_response);
  else -- error
    dbms_output.put_line('ERROR');
  end if;
end;
Can someone please help me? I've been thinking about this for weeks and I don't know where to start. What are the things I need to have and to know, and what are the steps to create the page?
I know this is a lot but I will really appreciate your help! Thanks in advance also!
This is a very broad question, so my answer is also pretty vague.
I don't think you want to create a function - before the Web Source module was introduced, this kind of thing was often done in an on-submit page process. In your process you'd need to:
Call the web API, pass in your search term, and get back a response. The old way to do this was with UTL_HTTP, but the newer APEX_WEB_SERVICE package made it much easier.
Using APEX_COLLECTION, create/truncate a collection and save the response clob into the collection's p_clob001 field.
Edit: here's a code snippet for that
l_clob := apex_web_service.make_rest_request(....);

APEX_COLLECTION.CREATE_OR_TRUNCATE_COLLECTION(p_collection_name => 'API_RESPONSE');

APEX_COLLECTION.ADD_MEMBER(
  p_collection_name => 'API_RESPONSE',
  p_clob001         => l_clob);
Then create an interactive report. The source will be a SQL query which will take the collection's clob, parse it as JSON, and convert into a tabular format (rows and columns) using JSON_TABLE.
Edit: add example
SELECT jt.id, jt.name
  FROM apex_collections c
       CROSS JOIN JSON_TABLE(
         c.clob001,   -- the following lines depend on your JSON structure
         '$[*]'
         COLUMNS (
           id   NUMBER       PATH '$.id',
           name VARCHAR2(10) PATH '$.name')
       ) jt
 WHERE c.collection_name = 'API_RESPONSE'
Alternatively, you could parse the CLOB using JSON_TABLE as part of your page process, save the output into a collection using APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY, and then just query that collection for your interactive report.
Edit: I'm not sure if this would work, but something like:
APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
  p_collection_name => 'API_RESPONSE',
  p_query           => 'SELECT t.id, t.name
                          FROM JSON_TABLE(
                                 l_clob,
                                 ''$[*]''
                                 COLUMNS (
                                   id   NUMBER       PATH ''$.id'',
                                   name VARCHAR2(10) PATH ''$.name'')
                               ) t');
(Note that l_clob, being a local PL/SQL variable, would not be visible inside that query string; the CLOB would have to be exposed through something the query can actually see, such as the collection from the first approach.)
Side note: as a very different option, you could also call the web service using JavaScript/jQuery/AJAX. I think this would be more complicated, but it's technically possible.

How to create table without schema in BigQuery by API?

Simply speaking, I would like to create a table with a given name, providing only the data.
I have some JUnit tests with sample data (JSON files).
Currently I have to provide a schema for those files to create tables for them.
I assume I shouldn't need to provide those schemas.
Why? Because in the BigQuery console I can create a table from a query (even one as simple as select 1, 'test'), or I can upload JSON and create a table with schema autodetection, so it should be possible to do it programmatically as well.
I saw https://chartio.com/resources/tutorials/how-to-create-a-table-from-a-query-in-google-bigquery/#using-the-api and I know I could turn the JSON data into queries and use the Jobs.insert API to run them, but that is over-engineered and has other disadvantages, e.g. boilerplate code.
After some research I found a possibly simpler way of creating the table on the fly, but it doesn't work for me; code below:
Insert insert = bigquery.jobs().insert(projectId,
    new Job().setConfiguration(
        new JobConfiguration().setLoad(
            new JobConfigurationLoad()
                .setSourceFormat("NEWLINE_DELIMITED_JSON")
                .setDestinationTable(
                    new TableReference()
                        .setProjectId(projectId)
                        .setDatasetId(dataSetId)
                        .setTableId(tableId)
                )
                .setCreateDisposition("CREATE_IF_NEEDED")
                .setWriteDisposition(writeDisposition)
                .setSourceUris(Collections.singletonList(sourceUri))
                .setAutodetect(true)
        )
    ));
Job myInsertJob = insert.execute();
The JSON file used as source data (pointed to by sourceUri) looks like this:
[
{
"stringField1": "value1",
"numberField2": "123456789"
}
]
Even though I used setCreateDisposition("CREATE_IF_NEEDED"), I still receive the error: "Not found: Table ..."
Is there any other method in the API, or a better approach than the above, that avoids having to specify a schema?
The code in your question is perfectly fine, and it does create the table if it doesn't exist. However, it fails when you use a partition id in place of the table id, i.e. when the destination table id is "table$20170323", which is what you used in your job. In order to write to a partition, you have to create the table first.
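Something along these lines should do it, using the same API client model classes as in your snippet; this is only a sketch and assumes a day-partitioned table, with tableId being the base table id without the "$20170323" suffix:
// Uses com.google.api.services.bigquery.model.Table, TableReference and TimePartitioning.
// Create the (empty, day-partitioned) table once, before running the load job.
Table table = new Table()
    .setTableReference(new TableReference()
        .setProjectId(projectId)
        .setDatasetId(dataSetId)
        .setTableId(tableId))   // base table id, without the partition decorator
    .setTimePartitioning(new TimePartitioning().setType("DAY"));

bigquery.tables().insert(projectId, dataSetId, table).execute();
After that, your existing load job with setAutodetect(true) can target the partition, e.g. tableId + "$20170323", together with WRITE_APPEND or WRITE_TRUNCATE.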

Inserting different parts of a record at different times when writing to database

My problem is that I'm writing data to separate fields in the same record at different times, and when I add data to the second set of fields in that same record, I get an error saying:
You must enter a value in the 'tblWorkoutDetails.Username' field
Despite having entered that data in the previous statement.
The table this refers to is called tblWorkoutDetails and has the following fields:
WorkoutID
Username
WorkoutDate
Weight
Waist
WorkoutID and Username make up a composite key. This table is relational to another table, which I'm also writing data to in the same way.
I'm writing the first part of my record (the fields making up my composite key) to the database using this code:
adotblWorkoutDetails['Username'] := Username;
adotblWorkoutDetails['WorkoutID'] := CurrentWorkout+1;
adotblWorkoutDetails.Post;
adotblWorkoutDetails.Refresh;
The values being assigned to the fields are simply variables and this executes perfectly.
The second statement which fills out the remainder of the fields in that record is as follows:
adotblWorkoutDetails['WorkoutDate'] := Date;
adotblWorkoutDetails['Weight'] := Weight;
adotblWorkoutDetails['Waist'] := Waist;
adotblWorkoutDetails.Post;
adotblWorkoutDetails.Refresh;
The program breaks with the aforementioned error at adotblWorkoutDetails.Post. While trying to fix this, I've attempted reassigning the required fields; however, I then got an error saying I was entering duplicate data. In addition, when I fill out every field in the record in one go (using sample data for the fields 'reserved' for the second data entry), the code executes perfectly, and so does the code writing to the other table.
I can't work out how to resolve this. If you need more information / screenshots of the code please ask.
This code gives me the 'you must enter a value in the field...' error, and uncommenting these two lines gives me the duplicate data error (the code in question was posted as screenshots).
I suspect what's happening is that your second operation is mistakenly doing another insert rather than an edit, and that the exception you're getting is the result of the Username field having its Required property set to True.
This works the way you're describing:
adotblWorkoutDetails.Insert;
adotblWorkoutDetails['Username'] := Username;
adotblWorkoutDetails['WorkoutID'] := CurrentWorkout+1;
adotblWorkoutDetails.Post;
//adotblWorkoutDetails.Refresh; // Refresh is not required
adotblWorkoutDetails.Insert;
adotblWorkoutDetails['WorkoutDate'] := Date;
adotblWorkoutDetails['Weight'] := Weight;
adotblWorkoutDetails['Waist'] := Waist;
adotblWorkoutDetails.Post;
// adotblWorkoutDetails.Refresh; // Again, not required
What you should be doing instead:
adotblWorkoutDetails.Insert;
adotblWorkoutDetails['Username'] := Username;
adotblWorkoutDetails['WorkoutID'] := CurrentWorkout+1;
adotblWorkoutDetails.Post;
// Locate the correct record using Locate or FindKey first, then
adotblWorkoutDetails.Edit;
adotblWorkoutDetails['WorkoutDate'] := Date;
adotblWorkoutDetails['Weight'] := Weight;
adotblWorkoutDetails['Waist'] := Waist;
adotblWorkoutDetails.Post;
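For the 'locate the correct record' step, the call would look something like this (composite key fields taken from the question; VarArrayOf needs the Variants unit; an untested sketch):
// Find the row that was just posted, using the composite key, then switch to edit mode
if adotblWorkoutDetails.Locate('Username;WorkoutID',
     VarArrayOf([Username, CurrentWorkout + 1]), []) then
begin
  adotblWorkoutDetails.Edit;
  adotblWorkoutDetails['WorkoutDate'] := Date;
  adotblWorkoutDetails['Weight'] := Weight;
  adotblWorkoutDetails['Waist'] := Waist;
  adotblWorkoutDetails.Post;
end;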
(As an aside, for future reference: never post your code as an image; if you need to post code, copy and paste the code itself.)

How to convert a MySQL DB into Drupal format tables

Hi there, I have some SQL tables and I want to convert these into a "Drupal node format", but I don't know how to do it. Does someone know at least which tables I have to write to in order to have a full node with all the keys etc.?
I will give an example:
I have these objects:
Anime
field animeID
field animeName
Producer
field producerID
field producerName
AnimeProducers
field animeID
field producerID
I have used the CCK module and created in my Drupal a new content type Anime and a new data type Producer that exists in an Anime object.
How can I insert all the data from my simple MySQL DB into Drupal?
Sorry for the long post; I wanted to give you the chance to understand my problem.
Thanks in advance for taking the time to read my post.
You can use either the Feeds module to import flat CSV files, or there is a module called Migrate that seems promising (albeit pretty intense). Both work on Drupal 6 or 7.
I think you can export CSV from your SQL database and then use http://drupal.org/project/node_import to import the CSV data into nodes. I don't know if there is another non-programmatic way.
The main tables for node property data are node and node_revision; have a look at the columns in those and it should be fairly obvious what needs to go where.
As far as fields go, their storage is predictable, so you would be able to automate an import (although I don't envy you having to write that!). If your field is called 'field_anime', its data will live in two tables, field_data_field_anime and field_revision_field_anime, which are keyed by the entity ID (in this case the node ID), entity type (in this case 'node' itself) and bundle (in this case the name of your node type). You should keep both tables up to date to ensure the revision system functions correctly.
The simplest way to do it though is with PHP and the node API functions:
/* This is for a single node, obviously you'd want to loop through your custom SQL data here */
$node = new stdClass;
$node->type = 'my_type';
$node->title = 'Title';
node_object_prepare($node);
// Fields
$node->field_anime[LANGUAGE_NONE] = array(0 => array('value' => $value_for_field));
$node->field_producer[LANGUAGE_NONE] = array(0 => array('value' => $value_for_field));
// And so on...
// Finally save the node
node_save($node);
If you use this method, Drupal will handle a lot of the messy stuff for you (for example, updating the taxonomy_index table automatically when adding a taxonomy term field to a node).