How can I convert Babylon glossaries (*.BGL) to a database table (MySQL)?
I don't know if there are any "easy" ways per se to do it, but two ways come to mind depending on what resources you have.
Export the BGL data directly to a CSV, then set up your database structure within MySQL and import the CSV.
Or
Export the BGL data to MS Excel; from there, there are multiple formats you can export to that MySQL can import.
There are also several paid solutions you can purchase to make the transition a little easier, but I don't know of any free ones, and I can't speak to the quality of any of them.
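For the CSV route, here's a minimal sketch of the import step in Python, assuming the mysql-connector-python package and a simple two-column (term, definition) export. The database, table, column, and file names are hypothetical placeholders; adjust them to your schema.

```python
# A minimal sketch of the CSV-import step. Assumes a two-column CSV
# (term, definition) exported from the glossary; all names below are
# hypothetical and should be adjusted to your setup.
import csv
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="root", password="secret", database="dictionaries"
)
cur = conn.cursor()

# Create a simple two-column table for the glossary entries.
cur.execute(
    """CREATE TABLE IF NOT EXISTS glossary (
           id INT AUTO_INCREMENT PRIMARY KEY,
           term VARCHAR(255) NOT NULL,
           definition TEXT
       )"""
)

# Read the exported CSV and bulk-insert the rows.
with open("glossary.csv", newline="", encoding="utf-8") as f:
    rows = [(r[0], r[1]) for r in csv.reader(f)]
cur.executemany("INSERT INTO glossary (term, definition) VALUES (%s, %s)", rows)

conn.commit()
conn.close()
```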
You can use PyGlossary
http://sourceforge.net/projects/pyglossary/
Tested and working.
Description of PyGlossary:
A tool for working on glossaries (dictionary databases) using Python, including editing glossaries and converting them between many formats, such as: Tabfile, StarDict format, xFarDic format, "Babylon Builder" source format, Omnidic format, etc.
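If you want to script the conversion itself, here's a rough sketch of driving PyGlossary from Python. The exact command-line invocation is an assumption (check the project's documentation), and the file names are hypothetical.

```python
# A rough sketch, assuming PyGlossary installs a command-line entry
# point that accepts an input path and an output path (verify the
# exact flags against the project's docs).
import subprocess

# Convert a Babylon .bgl file to a tab-separated text file, which is
# then easy to import into MySQL. File names are hypothetical.
subprocess.run(["pyglossary", "dictionary.bgl", "dictionary.txt"], check=True)
```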
On our Wordpress site, we use a plugin called s2member and it stores the levels (roles) of our clients as well as the times they were assigned a specific level in our database. I would like to create a table that shows when a user was assigned a specific level. I'm having a challenge getting the data I need because of the way the data is stored in the field. It stores all of the levels along with the associated dates and times when a user's level was changed in one field. In addition, it stores all of the times as Unix timestamps. Here's an example of a typical field associated with a client:
a:20:{s:15:"1562695223.0001";s:6:"level0";s:15:"1562695223.0002";s:6:"level1";s:15:"1562695223.0003";s:6:"level2";s:15:"1562695223.0004";s:6:"level3";s:15:"1577906312.0001";s:11:"ccap_prepay";s:15:"1596575898.0001";s:12:"-ccap_prepay";s:15:"1596575898.0002";s:13:"ccap_graduate";s:15:"1596575898.0003";s:11:"ccap_prepay";s:15:"1596575898.0004";s:7:"-level3";s:15:"1597196952.0001";s:14:"-ccap_graduate";s:15:"1597196952.0002";s:12:"-ccap_prepay";s:15:"1597196952.0003";s:13:"ccap_graduate";s:15:"1597196952.0004";s:11:"ccap_prepay";s:15:"1598382433.0001";s:14:"-ccap_graduate";s:15:"1598382433.0002";s:12:"-ccap_prepay";s:15:"1598382433.0003";s:11:"ccap_prepay";s:15:"1598382433.0004";s:6:"level3";s:15:"1605290551.0001";s:12:"-ccap_prepay";s:15:"1605290551.0002";s:11:"ccap_prepay";s:15:"1605290551.0003";s:13:"ccap_graduate";}
There are four columns in this table: umeta_id; user_id; meta_key; meta_value. The data above is stored in the meta_value column.
You'll notice that it also has multiple ccap_* entries. CCAP stands for custom capability, and I would like to be able to chart those assignments and associated times as well.
Do you have any idea how I can accomplish this?
Thank you for any help you can give.
I talked to an engineer about this, and he told me I would need to learn Python; I believe he said I would also need to learn Pandas and NumPy to extract the data I need, but he wasn't exactly sure. I started taking a data analyst course on Coursera, but I still haven't learned what I need, and it's already been several months. It would be great if someone could provide a solution that I could implement more quickly and use on an ongoing basis.
If there's a way to accomplish my goal by exporting this table to a CSV file and using Microsoft Excel or Google Sheets, I'm open to that too.
Here's an image of the table (if it helps): [screenshot: database table]
Here's an example of my desired output: [screenshot: desired output]
In my desired output, I used Excel and created one column that converts the Unix timestamp to a short date and another where I used a nested IF statement to translate the CCAP or level into the meaning we understand internally.
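If you'd rather script this than maintain Excel formulas, here's a minimal Python sketch using only the standard library. It assumes the meta_value field is a PHP-serialized array of "timestamp" => "level" string pairs exactly like the example above; the output file name is hypothetical. It uses a regex rather than a full PHP unserializer, so it only handles this simple string-to-string layout.

```python
# A minimal sketch, assuming meta_value is a PHP-serialized array of
# "timestamp" => "level" string pairs, as in the example above.
import csv
import re
from datetime import datetime, timezone

meta_value = 'a:20:{s:15:"1562695223.0001";s:6:"level0";...}'  # paste the real field here

# Every serialized string looks like s:<len>:"<text>"; the strings
# alternate: timestamp, level, timestamp, level, ...
strings = re.findall(r's:\d+:"(.*?)";', meta_value)
pairs = list(zip(strings[0::2], strings[1::2]))

with open("level_history.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "level_or_ccap"])
    for ts, level in pairs:
        # The keys are Unix timestamps with a ".000N" sequence suffix,
        # so float() still parses them.
        date = datetime.fromtimestamp(float(ts), tz=timezone.utc).strftime("%Y-%m-%d")
        writer.writerow([date, level])
```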
I am building a web application that will run off of data that is produced for the public by a governmental agency. The issue is that the CSV file that houses the data I need is a 2,000-column beast of a file. The file is what it is; I need to find the best way to take it and modify it.

I know I need to break this data up into much smaller tables within MySQL, but I'm struggling with the best way to do this. I need to make it as easy as possible to replicate next year when the data file is produced again (and every year after). I've searched for programs to help, and everything I've seen deals with a huge number of rows, not columns.

Has anyone else dealt with this problem before? Any ideas? I've spent the last week color-coding columns in Excel and moving data to new tabs, but this is time-consuming, will be super difficult to replicate, and I worry it leaves me open to copy-and-paste errors. I'm at a complete loss here!
Thank you in advance!
I suggest that you use functions in Excel to give every column an automatic name: "column1", "column2", "column3", etc.
After that, import the entire CSV file into MySQL.
Decide on which columns you want to group together into separate tables. This is the longest step and no program can help you manage this part.
Query your massive SQL table to get just the columns you want for each group. Export these queries to CSV and then import them as new tables in your database.
At the end, if you want, query all the columns you didn't put into separate groups. Make this a new table in the database and delete the original table to save on storage space.
Does this government CSV file get updated and republished in the same format every time? If so, you'll want to write a script to do all of the above automatically.
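Once the column groups are decided, here's a minimal pandas sketch of the splitting step that can be re-run each year. The group names and column lists are hypothetical placeholders; deciding which columns belong together is still the manual part, but once it's captured in this mapping the yearly re-run is one script.

```python
# A sketch of the column-grouping step, assuming pandas is available.
# Group names and column lists below are hypothetical.
import pandas as pd

SOURCE = "agency_data.csv"  # the 2,000-column file

# Map each target table to the source columns it should contain.
groups = {
    "demographics": ["col_1", "col_2", "col_3"],
    "finances": ["col_1", "col_40", "col_41"],  # repeat col_1 as the join key
}

for table, cols in groups.items():
    # usecols keeps only the selected columns in memory instead of
    # materializing all ~2,000.
    df = pd.read_csv(SOURCE, usecols=cols)
    df.to_csv(f"{table}.csv", index=False)  # import each file as its own MySQL table
```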
Let's say that on a daily basis I download a CSV file, and I would like to show in the dashboard the differences between the two versions of the same file (today's and the day before's): for example, the number of new rows added to the file (these could be defined as new cases), or the number of cells that changed from one category to another, such as 'Still Ill' to 'Recovered'.
Is this possible to achieve through a DAX expression, or through a specific transformation done on import? Or should I somehow append the CSV data to the original and have Power BI remove the duplicates?
I've attempted to solve the problem through the following three questions but somehow couldn't find the answer I needed there:
Detect differences between two versions of the same table
Python: Match values between two csv files
Issue computing difference between two csv files
It's possible if you load both versions of the CSV file into the same PBIX.
Otherwise, the answer is that Power BI is not a database.
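If you'd rather compute the diff before the data reaches Power BI, here's a minimal pandas sketch. It assumes each row has a stable "case_id" column and a "status" column; both names are hypothetical and should be swapped for whatever the real file uses.

```python
# A pre-processing sketch: count new rows and status transitions such
# as 'Still Ill' -> 'Recovered' between yesterday's and today's files.
# Column names "case_id" and "status" are hypothetical.
import pandas as pd

old = pd.read_csv("cases_yesterday.csv").set_index("case_id")
new = pd.read_csv("cases_today.csv").set_index("case_id")

# Rows present today but not yesterday = new cases.
new_cases = new.index.difference(old.index)
print(f"New rows: {len(new_cases)}")

# For rows present in both files, compare the status column.
common = new.index.intersection(old.index)
changed = old.loc[common, "status"] != new.loc[common, "status"]
recovered = (
    (old.loc[common, "status"] == "Still Ill")
    & (new.loc[common, "status"] == "Recovered")
)
print(f"Changed cells in 'status': {changed.sum()}")
print(f"'Still Ill' -> 'Recovered': {recovered.sum()}")
```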
What's the best way to get JSON flat files into SQL Server using SSIS?
Currently I've tried parsing the data in a script component, but with the number of JSON files I'm parsing (around 120 at a time) it takes upwards of 15 minutes to get the data in. I also don't consider this very practical.
Is there a way to combine the powers of SSIS and the OPENJSON command in SQL server? I'm running SQL server 2016 so I'm trying to leverage that command in the hopes that it works faster.
Also, I have no problem getting the JSON data in without losing its format. It looks like this: [screenshot of the imported JSON]
Is there a way for me to leverage that and get the JSON into a more normalized format?
This guy has an example of splitting a JSON string stored in a column that would be a good, easy starting point:
SSIS Data flow task runs by itself but not as part of package
You would want a class referencing a class if you have subclasses, kind of like an order class referencing a line-item class.
In that example, you would have a data flow with a foreach over orders and, within that, a foreach over line items that includes the order ID (there's a sketch of this idea below).
I had a good example with Survey Monkey but I can't find it right now.
I actually didn't use data flows with that example and just directly loaded from C#.
Here is the Survey Monkey class structure I referenced above:
Trouble using all members in a class. Why can I only use the list?
Good luck.
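To make the order/line-item idea concrete, here's a minimal Python sketch of flattening nested JSON into one row per line item, carrying the order ID down. The field names ("id", "line_items", etc.) are hypothetical.

```python
# A sketch of the nested foreach: one row per line item, with the
# parent order's ID copied onto each row. Field names are hypothetical.
import json

raw = '''
{"orders": [
  {"id": 1, "customer": "A", "line_items": [
     {"sku": "X1", "qty": 2}, {"sku": "X2", "qty": 1}]},
  {"id": 2, "customer": "B", "line_items": [
     {"sku": "X3", "qty": 5}]}
]}
'''

rows = []
for order in json.loads(raw)["orders"]:   # outer foreach: orders
    for item in order["line_items"]:      # inner foreach: line items
        rows.append((order["id"], item["sku"], item["qty"]))

# rows is now flat and ready to bulk-insert into a line_items table.
print(rows)  # [(1, 'X1', 2), (1, 'X2', 1), (2, 'X3', 5)]
```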
Actually figured this out.
I bring the files in one at a time, with all the JSON text in a single row.
From there I can use the OPENJSON command in SQL Server 2016.
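For anyone wanting to script that approach outside SSIS, here's a sketch in Python, assuming the pyodbc package and SQL Server 2016+. The JSON property names, the target table, and the connection string are hypothetical; adjust the WITH clause to match your documents.

```python
# A sketch of the single-row approach: read the whole JSON file as one
# string and let OPENJSON shred it server-side into a normalized table.
# Table, column, and property names below are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=Staging;Trusted_Connection=yes;"
)
cur = conn.cursor()

with open("input.json", encoding="utf-8") as f:
    doc = f.read()  # the whole file as one string, i.e. one "row" of JSON

cur.execute(
    """
    INSERT INTO dbo.Orders (OrderId, Customer)
    SELECT OrderId, Customer
    FROM OPENJSON(?)
    WITH (OrderId INT '$.id', Customer NVARCHAR(100) '$.customer')
    """,
    doc,
)
conn.commit()
```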
Hello, I am trying to export data from my remote database using MySQL Workbench.
I have been able to export successfully, but the records are not properly formatted into their right columns.
Please, is there any way to properly place the text in the right columns?
Find below a screenshot: [screenshot of the exported file]
In the above file there are two fields, insured name and registration number.
They are jumbled together.
Is there a way I can properly format the output?
Thanks
Since it worked... I'll post it as an answer :)
Both the export process and the import process need to have their column delimiters match. CSV normally uses commas, but tabs (\t) are also common. When exporting, look at the various properties during the export process; I'm betting you can find an option to change the character.
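As an illustration of the delimiter-matching point, here's a small Python sketch that re-writes a tab-delimited export with commas so the two fields land in separate columns on import. The file names are hypothetical, and it assumes the Workbench export came out tab-delimited.

```python
# A small sketch: convert a tab-delimited export to a comma-delimited
# CSV. File names are hypothetical.
import csv

with open("export.txt", newline="", encoding="utf-8") as src, \
     open("export_fixed.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src, delimiter="\t")  # match the export's delimiter
    writer = csv.writer(dst)                  # default: comma-delimited
    writer.writerows(reader)
```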