Jira 7.x database schema changes - MySQL

Does anyone know if the MySQL database schema changed from Jira version 6.x to 7.x?
I asked Atlassian support, but got no definitive answer. They proposed installing a clean version of Jira 7 and comparing its tables with version 6.

Well, since no one replied, I downloaded and installed Jira 7 and compared it with Jira 6. Here are my observations (in my environment).
-- Table counts:
jira6 - 267
jira7 - 239
-- a bunch of AO.* tables were removed (this is where your count could differ)
-- these tables were added
board
boardproject
deadletter
tempattachmentsmonitor
-- these tables have added/changed indexes
cwd_user
jiraaction
jiraissue
All in all, the main change I saw was the addition of CHARSET=utf8 COLLATE=utf8_bin in every "create table" statement.
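If you want to reproduce this comparison against your own instances, here is a minimal sketch, assuming the mysql2 gem and two schemas named jira6 and jira7 (host and credentials are placeholders):

require "mysql2"

# List the table names in a schema via SHOW TABLES.
def tables(database)
  client = Mysql2::Client.new(host: "localhost", username: "root",
                              password: "secret", database: database)
  client.query("SHOW TABLES").map { |row| row.values.first }.sort
end

jira6 = tables("jira6")
jira7 = tables("jira7")

puts "jira6: #{jira6.size} tables, jira7: #{jira7.size} tables"
puts "Removed in 7: #{(jira6 - jira7).join(', ')}"
puts "Added in 7:   #{(jira7 - jira6).join(', ')}"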

I have asked the same thing, and they don't have one available.

Database schemas are available in PDF format here:
https://developer.atlassian.com/jiradev/jira-platform/jira-architecture/database-schema
Atlassian says:
The PDFs below show the database schema for JIRA 7.0, JIRA 6.1 EAP 3 (m03) and JIRA 5.1.2.
jira70_schema.pdf
JIRA61_db_schema.pdf
JIRA_512_DBSchema.pdf
The database schema is also described in WEB-INF/classes/entitydefs/entitymodel.xml in the JIRA web application. The entitymodel.xml file has an XML definition of all JIRA's database tables, table columns and their data type. Some of the relationships between tables also appear in the file.
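For orientation, entity definitions in entitymodel.xml follow the OFBiz entity engine format and look roughly like the sketch below; the exact fields vary by version, so treat the names here as illustrative only:

<entity entity-name="Issue" table-name="jiraissue" package-name="">
    <field name="id" type="numeric"/>
    <field name="issuenum" type="numeric"/>
    <field name="summary" type="long-varchar"/>
    <prim-key field="id"/>
</entity>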
Generating JIRA database schema information
To generate schema information for the JIRA database, e.g. the PDFs above, follow the instructions below. You can generate schema information in PDF, TXT and DOT formats. Note that if you want to generate the schema in PDF format, you need to have Graphviz installed.
Download the attached plugin:
For JIRA 5: jira-schema-diagram-generator-plugin-1.0.jar
For JIRA 6: jira-schema-diagram-generator-plugin-1.0.1.jar
For JIRA 7: jira-schema-diagram-generator-plugin-1.1.0.jar
Install the plugin in your JIRA instance by following the instructions on Managing JIRA's Plugins.
Go to the JIRA administration console and navigate to System > Troubleshooting and Support > Generate Schema Diagram
Keyboard shortcut: g + g, then start typing 'generate'.
Enter the tables/columns to omit from the generated schema information, if desired.
If you want to generate a pdf, enter the path to the Graphviz executable.
Click Generate Schema.
The 'Database Schema' page will be displayed with links to the schema file in TXT, DOT and PDF formats.
(You could probably get the XML or txt file and compare it in a file compare program to get specific changes.)
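A quick way to do that comparison programmatically, sketched in Ruby (the file names are placeholders for whatever the plugin generated for each version):

# Print the lines unique to each generated schema dump -- a crude diff,
# but usually enough to spot added/removed tables and columns.
six   = File.readlines("jira6_schema.txt").map(&:strip)
seven = File.readlines("jira7_schema.txt").map(&:strip)

puts "Only in the 6.x schema:", (six - seven)
puts "Only in the 7.x schema:", (seven - six)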

Related

How to upload CSV data that contains newlines with dbt

I have a 3rd party generated CSV file that I wish to upload to Google BigQuery using dbt seed.
I manage to upload it manually to BigQuery, but I need to enable "Quoted newlines" which is off by default.
When I run dbt seed, I get the following error:
16:34:43 Runtime Error in seed clickup_task (data/clickup_task.csv)
16:34:43 Error while reading data, error message: CSV table references column position 31, but line starting at position:304 contains only 4 columns.
There are 32 columns in the CSV. The file contains column values with newlines. I guess that's where the dbt parser fails. I checked the dbt seed configuration options, but I haven't found anything relevant.
Any ideas?
As far as I know, the seed feature is very limited by what is built into dbt-core, so seeds are not the way I would go here. You can see the history of requests for the expansion of seed options on the dbt-core issues repo (including my own request for similar optionality, #3990), but I have yet to see any real traction on this.
That said, what has worked very well for me is to store flat files within the GCP project in a GCS bucket and then utilize the dbt-external-tables package for very similar but much more robust file structuring. Managing this can be a lot of overhead, I know, but it becomes very much worth it if your seed files continue expanding in a way that can take advantage of partitioning, for instance.
And more importantly, as mentioned in this answer from Jeremy on Stack Overflow:
The dbt-external-tables package supports passing a dictionary of options for BigQuery external tables, which maps to the options documented here.
For your case, that should be either the quote or allowQuotedNewlines option. If you do choose to use dbt-external-tables, your source .yml for this would look something like:
gcs.yml
version: 2

sources:
  - name: clickup
    database: external_tables
    loader: gcloud storage
    tables:
      - name: task
        description: "External table of ClickUp tasks, stored as CSV files in Cloud Storage"
        external:
          location: 'gs://bucket/clickup/task/*'
          options:
            format: csv
            skip_leading_rows: 1
            quote: "\""
            allow_quoted_newlines: true
Or something very similar.
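Once the source is defined, you create/refresh the external table with the package's run operation; I believe the documented entry point is dbt run-operation stage_external_sources, but check the package README for your version.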
And if you end up taking this path and storing task data in a daily partition like tasks_2022_04_16.csv, you can access that file name and other metadata via the provided pseudocolumns, which Jeremy also shared with me here:
Retrieve "filename" from gcp storage during dbt-external-tables sideload?
I find it to be a very powerful set of tools for working with files in BigQuery specifically.

snmp_exporter fails to generate snmp.yml

I have been trying to configure Prometheus to collect SNMP information and then send that data to Grafana. My problem is that the generator.yml configuration I use to generate snmp.yml fails.
I guess my main problem is that when I reference Huawei MIBs in the generator.yml file, only some of the MIBs can be resolved; the rest cannot be referenced, so the OIDs cannot be looked up and the configuration file cannot be generated.
Does anyone know how I should go about doing this? Or have any experience with generator files or Huawei MIBs?
(A screenshot of the software error report was attached to the original post.)
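Not an answer, but for reference: a generator.yml that references vendor MIBs looks something like the sketch below, where huawei_switch and hwEntityStateTable are made-up placeholders. Symbolic names only resolve if the MIB file defining them, plus every MIB it IMPORTS from, is visible to the generator (if I remember the README correctly, it reads MIBs from the directories in the MIBDIRS environment variable, defaulting to its local mibs/ directory), so a partial Huawei MIB set with missing imports would produce exactly this kind of failure:

modules:
  huawei_switch:           # placeholder module name
    walk:
      - 1.3.6.1.2.1.2      # numeric OIDs always work (IF-MIB interfaces)
      - hwEntityStateTable # hypothetical symbolic name; needs its MIB
                           # and that MIB's imports present in MIBDIRS

Running something like MIBDIRS=mibs ./generator generate and reading the parse errors in the log usually points at the missing MIB files.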

Move Rails 4 translations to PostgreSQL

I need to store the locales of a Rails app in the DB, and give admins access to edit the locales from the app's admin area (ActiveAdmin).
This is the idea I have:
Create a file with the locales, named e.g. en.yml.lock.
Write translations in this file (duplicated to en.yml in development).
Create a table for translations (json or hstore).
Create a Capistrano task that loads the file into the DB.
After deploy, generate en.yml on the server with the values from the DB.
After editing translations in the DB, click some "Regenerate" button or use a callback to rewrite en.yml on the server.
Reboot the application.
What do you think about it? Maybe you have a prettier solution?
And one more question: I think storing this as JSON is easier, but how can I generate a form for translations in which the user can edit only the values, not the keys?
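Not a full review of the plan, but steps 5-6 (regenerating en.yml from the DB) are easy to sketch. A minimal version, assuming a hypothetical Translation model with dot-separated key and value string columns:

require "yaml"

# Rebuild the nested locale hash ("errors.messages.blank" => "...") from flat rows.
def nested_locales
  Translation.pluck(:key, :value).each_with_object({}) do |(key, value), tree|
    *path, leaf = key.split(".")
    path.reduce(tree) { |node, part| node[part] ||= {} }[leaf] = value
  end
end

File.write(Rails.root.join("config", "locales", "en.yml"),
           { "en" => nested_locales }.to_yaml)
I18n.reload! # pick up the regenerated file without a full app reboot

As for the form question: render each key as a plain label (or a disabled field) and emit one text input per value, so only the values are ever submitted to the controller.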

Migrating from Lighthouse to Jira - Problems Importing Data

I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files and attachments. The total size is around 50MB. JIRA allows importing CSV data, so I was thinking of trying to convert the JSON data to CSV, but all converters I have seen online will only handle a single file, rather than parsing recursively through an entire folder structure and nicely creating the CSV equivalent, which could then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
We just went through a Lighthouse to JIRA migration and ran into this. The best thing to do in your script is to start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file to import into JIRA that contains all the tickets.
In Ruby (which is what we used), it would look something like this:
require "json"

Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |path|
  ticket = JSON.parse(File.read(path))
  # access the ticket data here and append a row to your master CSV
end
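To get from that loop to the "minimum CSV" the first answer describes, the loop body could look like the sketch below. The JSON shape (a top-level "ticket" key, fields like title, state and latest_body) is an assumption about the Lighthouse export format, so inspect one of your own ticket.json files and adjust:

require "json"
require "csv"

CSV.open("jira_import.csv", "w", force_quotes: true) do |csv|
  csv << %w[summary status description] # headers for the JIRA CSV importer to map
  Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |path|
    t = JSON.parse(File.read(path))["ticket"] # assumed wrapper key
    csv << t.values_at("title", "state", "latest_body")
  end
end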

b2evolution to WordPress move & conversion

I have an old b2evolution blog (v1.10.2) over on a shared hosting account (w/ Plusmail).
I'm slowly migrating all my stuff to a new shared hosting account (w/ cPanel).
I want to export all blog data from my b2evolution and import into a brand new WordPress (v3.1) installation on the new server.
Both accounts have MySQL databases.
Most of the online material I'm reading about this either has both blogs on the same server, covers a b2evolution version much newer than mine, or covers a WordPress version below 3.
I'm interested in anyone's constructive suggestions regarding the most painless way to do this.
Thank-you!
EDIT
I ended up using a WordPress CSV import plugin. It's a little tedious preparing your CSV file but it's a rock solid method... you'll get exactly what you put in your spreadsheet imported instantly into WordPress without disturbing any existing posts.
In WordPress, install the plugins 'FeedWordPress' and optionally 'FeedWordPress Content Filter'. Once configured, these will allow you to import your b2evolution posts directly from an RSS feed. If your new WordPress users have emails matching the old b2evolution users, the syndication will automatically assign the posts to them.
Here's how I ended up converting this blog. The procedure below may seem like a lot of work but compared to the amount of time I spent looking for conversion scripts, it was a breeze. I only had to export/import 70 posts and 114 comments so your mileage may vary.
Export the MySQL database from the old b2evolution blog. You only need the table containing your posts (evo_posts). If you want to mess with comments, you'll need that table too (evo_comments). Export those as CSV files.
Download & install the CSV Importer plugin version 0.3.5 by dvkob into your new WordPress v3.1 installation. You do not need a fresh or empty WordPress blog... this import will not wipe out anything in WordPress... it will only add more posts. Back up your database to be safe. http://wordpress.org/extend/plugins/csv-importer/
Read the installation directions and follow them exactly. At first you may think you only have to move a single PHP file into your WordPress directory; in fact, you need to copy the plugin file plus the supporting files in its directory.
Read the documentation and look at the sample CSV files included with the plugin. It shows what column headings you'll need and what each one means.
Open the CSV files you exported from the b2evolution SQL database in Excel. There you can just delete all the unused columns and clean up your data if necessary. Don't forget to rename the column headings as per the CSV plugin requirements (a scripted alternative to this step is sketched after these instructions).
OPTIONAL: If you want to keep your comments intact and attached to each post, you'll need to match up the post ID from the comment table to the post ID in your new spreadsheet. Each comment gets a new set of columns. One post of mine had 21 comments so I had to add 63 columns... each comment got a username, content, and date/time but you can do this any way you wish. Maybe write an Excel macro that handles this.
Once you get your data all cleaned up and formatted properly, save your Excel sheet as CSV (Windows) format. I tried CSV (comma separated) and it failed to import.
Log into your WordPress Dashboard and your plugin is located under Tools as CSV Import. Upload and hit import... that's it. It took less than one second to add my 70 posts & comments.
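If you would rather script step 5 than hand-edit in Excel, here is a hedged Ruby sketch of the remapping. Both the b2evolution column names (post_title, post_content, post_datestart) and the plugin's expected headings (csv_post_title, csv_post_post, csv_post_date) are assumptions from memory, so verify them against your own evo_posts export and the plugin's sample files:

require "csv"

CSV.open("wordpress_import.csv", "w", force_quotes: true) do |out|
  out << %w[csv_post_title csv_post_post csv_post_date] # assumed plugin headings
  CSV.foreach("evo_posts.csv", headers: true) do |row|
    out << row.fields("post_title", "post_content", "post_datestart")
  end
end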
NOTES:
Experiment with how this plugin creates your categories. It seems to want to create all new categories as children of "uncategorized". Even if a category already exists at the top level as a sibling of "uncategorized", it still creates a duplicate as a child. Not a big deal; it's easy to rearrange the categories in the WP Dashboard after import.
It's fussy about the CSV file format. From Excel, make sure it's saved as CSV (Windows) format.
This may seem like a lot of work, but the conversion alternatives caused me more trouble. A day and a half jacking around trying to get PHP converters to work and trying to get an old skin to display the b2evolution blog in MT format, compared to only about an hour messing around in Excel... this was a lifesaver.