Transform xlsx to CSV [closed]

I need to transform an xlsx file to CSV on AIX. The server doesn't have any command for that, and I am not allowed to install anything on it. Also, no Python or Perl libraries for reading spreadsheets are installed on the server.
Is there still any workaround for doing this?
P.S.: it has to work with what is already on the server.

xlsx is an Open XML format; the specifications can be found online.
Otherwise, Perl libraries for it can be found on CPAN; their sources may help you pick out the parts you need.
To start, maybe unzip the .xlsx: it will give a set of XML files. Have a look at whether the data can be retrieved from them.
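If a stock Python interpreter happens to be present on the server (the question only rules out spreadsheet libraries, not the language itself), the standard library alone is enough to try exactly that: unzip the workbook and walk the sheet XML. A minimal sketch, assuming a simple workbook whose first sheet lives at xl/worksheets/sheet1.xml, with no inline strings, formulas or date formatting, and ignoring gaps left by blank cells:

    import csv
    import zipfile
    import xml.etree.ElementTree as ET

    # Namespace prefix for SpreadsheetML elements.
    M = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"

    def xlsx_to_csv(xlsx_path, csv_path):
        with zipfile.ZipFile(xlsx_path) as z:
            # Text cells usually store an index (t="s") into the shared-strings table,
            # so that table has to be loaded first.
            shared = []
            if "xl/sharedStrings.xml" in z.namelist():
                for si in ET.fromstring(z.read("xl/sharedStrings.xml")).iter(M + "si"):
                    shared.append("".join(t.text or "" for t in si.iter(M + "t")))
            sheet = ET.fromstring(z.read("xl/worksheets/sheet1.xml"))
            with open(csv_path, "w", newline="") as out:
                writer = csv.writer(out)
                for row in sheet.iter(M + "row"):
                    values = []
                    for cell in row.iter(M + "c"):
                        v = cell.find(M + "v")
                        text = v.text if v is not None else ""
                        if cell.get("t") == "s":  # shared-string cell: resolve the index
                            text = shared[int(text)]
                        values.append(text)
                    writer.writerow(values)

    xlsx_to_csv("input.xlsx", "output.csv")

The shared-strings step is the non-obvious part: most text cells in an xlsx file contain only an integer index, and the actual strings sit together in xl/sharedStrings.xml.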

There is no way to do it, or at least no easy one (like a simple script or command). Maybe by working on those XML files (see Nahuel Fouilleul's answer), but it would take too much time.
Judging by the reception of the question alone, it looks like people don't want to touch this issue even with a stick.

Related

Can anyone help me download Poké API into MySQL? [closed]

I'm collaborating with some friends on making a Pokémon database for a Pokédex Challenge, and we realised that everything we wanted this database to do, Poké API was already doing. However, I can't seem to download it into MySQL, no matter what I try.
I've copied and pasted the raw files into the SQL of a new database and of an existing one, and tried to import using Pages (I don't know what other program to use, and likely don't have it anyway).
EDIT: So I'm trying to recreate the Poké API database so that I have my own copy to edit and query. On their GitHub, Poké API offers various ways of (I think) copying the database into different database management tools, but I don't see mine there. I've also tried to copy and paste the raw CSV and JSON files that their GitHub provides into my database management tool (which I think is MariaDB), but this hasn't worked either. I also tried importing it as a CSV file, but evidently I can't use Pages on my Mac for that, which I expected; I have no idea what else to use.
I'm not very experienced with MySQL and am still learning, so if there is information you need to help me that's missing, just let me know.
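For reference, the approach that usually works with the CSV files in Poké API's GitHub repository is to create a matching table and load the rows with a small script, rather than pasting raw file contents into an SQL console. A hedged sketch, assuming mysql-connector-python is installed and using an illustrative table and column list (check the header row of the actual CSV for the real columns):

    import csv
    import mysql.connector

    # Connection details are placeholders for a local MariaDB/MySQL instance.
    conn = mysql.connector.connect(
        host="localhost", user="root", password="secret", database="pokedex")
    cur = conn.cursor()

    # Illustrative schema; not necessarily Poké API's real column set.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS pokemon (
            id INT PRIMARY KEY,
            identifier VARCHAR(100),
            height INT,
            weight INT
        )
    """)

    with open("pokemon.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            cur.execute(
                "REPLACE INTO pokemon (id, identifier, height, weight) "
                "VALUES (%s, %s, %s, %s)",
                (row["id"], row["identifier"], row["height"], row["weight"]))

    conn.commit()
    conn.close()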

How can I use the website Open Library to store information in a database [closed]

I am trying to make a web page with HTML that gets information about books, and to place that information into a database so I can use it. Any idea how to take the information from the website Open Library and store it in a database?
Here is the link to the API if needed:
https://openlibrary.org/developers/api
Thanks in advance.
If PostgreSQL and Python are a viable option, LibrariesHacked has a ready-made solution on GitHub for importing and searching Open Library data.
GitHub: LibrariesHacked / openlibrary-search
Using a PostgreSQL database, it should be possible to import the data directly into tables and then do complex searches with SQL.
Unfortunately, the downloads provided are a bit messy. The Open Library file always errors, as the number of columns provided seems to vary. Cleaning it up is difficult, as the text file for the editions alone is 25 GB.
That means another Python script to clean up the data. The file openlibrary-data-process.py simply reads in the CSV (Python is a little more forgiving about dodgy data) and writes it out again, but only if there are 5 columns.
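That cleanup step is small enough to sketch. A minimal version of the idea described above, assuming the dump is tab-separated with the record JSON in the fifth column (file names are placeholders):

    import csv
    import sys

    # The fifth column holds large JSON blobs, which easily exceed
    # the csv module's default field size limit.
    csv.field_size_limit(sys.maxsize)

    with open("ol_dump_editions.txt", newline="", encoding="utf-8") as src, \
         open("ol_dump_editions_clean.txt", "w", newline="", encoding="utf-8") as dst:
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t")
        for row in reader:
            if len(row) == 5:  # drop malformed rows instead of failing the whole import
                writer.writerow(row)

Streaming row by row keeps memory use flat, which matters for a 25 GB input file.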

Extraction of data from PDF-converted XBRL files [closed]

I have some XBRL files converted into PDF. Now I want to develop a project that would automatically extract all the data from these files. The project would be developed in Java. I am unable to get any lead. Any suggestions on how to start the project would be very much appreciated, as there is very limited information on the internet about this.
I would recommend trying to get the original XBRL (or iXBRL) files rather than use the generated PDFs.
XBRL was designed in the first place to be easily machine-readable and to avoid having to reverse-engineer printed documents or PDFs. Attempting to read the PDFs means not leveraging the potential of XBRL, and may lead to imprecision and errors.
Then, if you can get these source files, I recommend using an XBRL processor that will take care of all the complexity for you. This will save a lot of time compared to using a raw XML processor. It is likely that there are XBRL libraries written for Java.
I am sorry not to be able to give you a better answer, but I hope this helps you get started.
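If the Java requirement is at all flexible, Arelle is one well-known open-source XBRL processor (written in Python) that shows how little code a processor leaves you to write. The sketch below reflects Arelle's API as I recall it (Cntlr, modelManager.load, the facts list); treat the exact names as an assumption, since they may differ between versions:

    # Hedged sketch using Arelle; API names are from memory and may vary by version.
    from arelle import Cntlr

    cntlr = Cntlr.Cntlr()
    # Path is a placeholder for an actual XBRL instance document.
    model_xbrl = cntlr.modelManager.load("filing-instance.xbrl")

    # Each fact is a reported value tied to a concept and a context (entity/period),
    # so the processor has already done the hard structural work for you.
    for fact in model_xbrl.facts:
        print(fact.concept.qname, fact.value)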

Huge geoJSON file -- cannot even edit [closed]

I have a geoJSON file that needs to be edited. For example, I need to do a Find and Replace operation. However, I cannot even open the file (150 MB) in some applications. With TextEdit (I'm on a Mac), I can open the file, but the app stops responding and freezes almost immediately when I try any Find and Replace operation.
The file contains data (Points) that I'd like to map (I think I will use Leaflet), so eventually I need to transfer the file to my server. Given the size of the file, will I run into any problems there, and then with mapping the points in a browser?
Any advice or pointers on what to do would be appreciated.
Check out this link on replacing text from the command line. You should be able to use sed to replace those instances without having to load the whole file into memory (I think).
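Something like sed 's/OLD_TEXT/NEW_TEXT/g' points.geojson > points-fixed.geojson streams the file instead of loading it whole, which is why it copes where TextEdit freezes. The same streaming idea in Python, with placeholder names throughout:

    # Streaming find-and-replace: reads one line at a time, writes it back out,
    # so memory use stays small regardless of file size.
    OLD, NEW = "OLD_TEXT", "NEW_TEXT"

    with open("points.geojson", encoding="utf-8") as src, \
         open("points-fixed.geojson", "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(line.replace(OLD, NEW))

One caveat: a minified GeoJSON file can be a single very long line, in which case both sed and this script hold that one line in memory; at 150 MB that is still usually fine.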

In MediaWiki, where are my changes actually stored [closed]

When I write in the wiki, where are the changes actually being stored?
I have searched the source code for keywords that I have written and I can't find them, which obviously means the text is being stored somewhere it cannot be searched directly.
I have made changes to the wiki, i.e. written in it, but SourceTree does not seem to be recognising them.
Do you mean the site's contents? They are stored in a database file which is read and written by the code.
It would be quite unmaintainable, if not outright dangerous, to mix user-submitted data with executable code.
They are stored in a database table called page (or something similar).
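More precisely, in the MediaWiki schema of that era the page table holds metadata, revision holds the history, and the wikitext itself sits in the text table's old_text column (MediaWiki 1.35+ restructured this behind content/slot tables). A hedged sketch of reading one page's current wikitext straight from the database, assuming mysql-connector-python and placeholder credentials taken from LocalSettings.php:

    import mysql.connector

    # Connection details are placeholders; the real ones are in LocalSettings.php.
    conn = mysql.connector.connect(
        host="localhost", user="wikiuser", password="secret", database="wikidb")
    cur = conn.cursor()

    # Classic (pre-1.35) schema: page -> revision -> text.
    cur.execute("""
        SELECT t.old_text
        FROM page p
        JOIN revision r ON r.rev_id = p.page_latest
        JOIN `text` t ON t.old_id = r.rev_text_id
        WHERE p.page_namespace = 0 AND p.page_title = %s
    """, ("Main_Page",))
    row = cur.fetchone()
    # old_text is a blob, so it comes back as bytes.
    print(row[0].decode("utf-8") if row else "page not found")
    conn.close()

Note that old_text may be compressed or stored externally depending on the wiki's configuration, which is another reason the keywords never show up when you grep the source tree.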