I have an old b2evolution blog (v1.10.2) over on a shared hosting account (w/ Plusmail).
I'm slowly migrating all my stuff to a new shared hosting account (w/ cPanel).
I want to export all blog data from my b2evolution and import into a brand new WordPress (v3.1) installation on the new server.
Both accounts have MySQL databases.
Most of the online guides I'm finding assume both blogs are on the same server, cover a b2evolution version much newer than mine, or target a WordPress version below 3.
I'm interested in anyone's constructive suggestions regarding the most painless way to do this.
Thank you!
EDIT
I ended up using a WordPress CSV import plugin. Preparing the CSV file is a little tedious, but it's a rock-solid method... you get exactly what you put in your spreadsheet imported instantly into WordPress, without disturbing any existing posts.
In WordPress, install the plugins 'FeedWordPress' and, optionally, 'FeedWordPress Content Filter'. Once configured, these will let you import your b2evolution posts directly from an RSS feed. If your new WordPress users have the same email addresses as the old b2evolution users, the syndication will automatically assign the posts to them.
Here's how I ended up converting this blog. The procedure below may seem like a lot of work but compared to the amount of time I spent looking for conversion scripts, it was a breeze. I only had to export/import 70 posts and 114 comments so your mileage may vary.
Export the MySQL database from the old b2evolution blog. You only need the table containing your posts (evo_posts). If you want to mess with comments, you'll need that table too (evo_comments). Export those as CSV files.
Download and install CSV Importer plugin version 0.3.5 by dvkob into your new WordPress v3.1 installation. You do not need a fresh or empty WordPress blog... this import will not wipe out anything in WordPress; it will only add more posts. Back up your database to be safe. http://wordpress.org/extend/plugins/csv-importer/
Read the installation directions and follow them exactly. At first you may think you only have to move a single PHP file into your WordPress directory; in fact, you need to copy the entire plugin directory and its contents, not just the one file.
Read the documentation and look at the sample CSV files included with the plugin. They show which column headings you'll need and what each one means.
Open the CSV files you exported from the b2evolution SQL database in Excel. There you can just delete all the unused columns and clean up your data if necessary. Don't forget to rename the column headings as per the CSV plugin requirements.
OPTIONAL: If you want to keep your comments intact and attached to each post, you'll need to match up the post ID from the comments table to the post ID in your new spreadsheet. Each comment gets its own set of columns: one post of mine had 21 comments, so I had to add 63 columns... each comment got a username, content, and date/time, but you can do this any way you wish. Maybe write an Excel macro that handles this, or script the matching (see the sketch after these steps).
Once you get your data all cleaned up and formatted properly, save your Excel sheet as CSV (Windows) format. I tried CSV (comma separated) and it failed to import.
Log into your WordPress Dashboard and your plugin is located under Tools as CSV Import. Upload and hit import... that's it. It took less than one second to add my 70 posts & comments.
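If you'd rather script the optional comment-matching step above than build the columns by hand in Excel, the join is easy to automate. Below is a minimal sketch in Python; the file names and the b2evolution column names (post_ID, post_title, post_content, post_datestart, comment_post_ID, comment_author, comment_content, comment_date) are assumptions, so check them against your actual CSV exports, and you will still need to add the column headings the CSV Importer plugin expects (see its sample files).

import csv
from collections import defaultdict

# Group the exported comments by the post they belong to.
# All column names here are assumptions; check your exported CSV headers.
comments = defaultdict(list)
with open("evo_comments.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        comments[row["comment_post_ID"]].append(row)

with open("evo_posts.csv", newline="", encoding="utf-8") as f_in, \
     open("posts_with_comments.csv", "w", newline="", encoding="utf-8") as f_out:
    writer = csv.writer(f_out)
    for post in csv.DictReader(f_in):
        row = [post["post_title"], post["post_content"], post["post_datestart"]]
        # Append author / content / date columns for each comment on this post.
        for c in comments.get(post["post_ID"], []):
            row.extend([c["comment_author"], c["comment_content"], c["comment_date"]])
        writer.writerow(row)

Open the result in Excel, add the plugin's required headings, and carry on with the steps above.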
NOTES:
Experiment with how this plugin creates your categories. It seems that it wants to create all new categories as a child of "uncategorized". Even if the category already exists on the top level as a sibling of "uncategorized", it still creates a duplicate as a child. Not a big deal, easy to change the categories around in the WP Dashboard after import.
It's fussy about the CSV file format. From Excel, make sure it's saved as CSV (Windows) format.
This may seem like a lot of work, but the conversion alternatives caused me more trouble. I spent a day and a half jacking around trying to get PHP converters to work and trying to get an old skin to display the b2evolution blog in MT (Movable Type) export format, compared to only about an hour messing around in Excel... this was a lifesaver.
I'm new to coding and it's a lot of trial and error. Right now I'm struggling with HTML tables.
To explain: I am building an Electron desktop application for stocks. I can enter values via the GUI into an HTML table and also export the table as an Excel file. But every time I reload the app, all the data in the table is gone. It would be great to save this data permanently and simply add new data to the existing table after an application restart.
What's the best way to achieve this?
In my mind, the best way would be to overwrite the existing Excel file with the new work (the old and new data from the table), because that would make it easy to install the tool on a new PC and simply import the Excel file to have all the data there. I don't have access to a web server, so I think a local Excel file would be better than a PHP solution.
Thank you.
<table class="table" id="tblData" >
<tr>
<th>Teilenummer</th>
<th>Hersteller</th>
<th>Beschreibung</th>
</tr>
</table>
This is the actual table markup.
Your question has two parts, it seems to me.
1. data representation and manipulation
2. data persistence
For #1, I'd suggest taking a look at Tabulator, in particular its methods of importing and exporting data. In my projects, I use the JSON format with Tabulator and save the data locally so it persists between sessions.
So for #2, how and where to save the data? Electron has built-in methods for getting the paths to common user directories. See app.getPath(name). Since it sounds like you have just one file to save, which does not need to be directly accessible to the user, appData is probably a good place to store it.
As for the "how" to store it – you can just write a file to that path using Node fs, though I like fs-jetpack too. Tabulator can save data as well.
Another way to store data is with electron-store. It works very well, though I've only used it with small amounts of data.
So the gist is: when your app starts, it loads the data, and when the app quits, it saves the data along with any changes that have been made; though I'd suggest saving after every change.
So, there are lots of options depending on your needs.
I have referred to related questions, but the solutions didn't work in my case. Hence, I'm raising a new question.
I have imported a CSV file to import products into our Magento store.
The products import successfully, both via the admin panel import functionality and via the Dataflow - Profiles functionality.
But no products are displayed on the front end or in the admin panel.
If anyone has faced such an issue, please help me solve it.
Please note: all of the products are new, simple products.
Thank You.
I've also faced this issue. You need to flush all Magento caches and reindex the data; that can solve your problem. But first of all, check in the database whether the products were actually stored, using the query below:
select count(*) from `catalog_product_entity`;
Or check the URL below for more detail:
https://magento.stackexchange.com/questions/53157/products-exist-in-the-database-but-are-not-showing-in-backend-or-frontend
In my case, I created a new spreadsheet in OpenOffice and copied all the data from the import file into it.
I saved it again with UTF-8 encoding in .csv format.
And then I tried the import again.
With this process, I was able to import all the products successfully, as the Magento admin panel treated it as a completely new import file.
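If you'd rather not round-trip through OpenOffice, the same re-encoding can be scripted. Here is a minimal sketch in Python; the source encoding (cp1252) and the file names are assumptions, so check what your export actually uses.

# Re-save the import file as UTF-8; the source encoding below is an assumption.
with open("products_import.csv", encoding="cp1252") as src:
    data = src.read()
with open("products_import_utf8.csv", "w", encoding="utf-8") as dst:
    dst.write(data)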
Thank You.
I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files, and attachments; the total size is around 50 MB. JIRA allows importing CSV data, so I was thinking of converting the JSON data to CSV, but all the converters I have seen online will only handle a single file, rather than recursing through an entire folder structure and producing a CSV equivalent that can then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
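To make that concrete, here is a minimal sketch of the first script in Python. It assumes the export puts each ticket in its own tickets/*/ticket.json file and guesses at field names like title, body, and number, so inspect one of your ticket.json files and adjust accordingly.

import csv
import glob
import json

# Emit one denormalized CSV row per ticket; the JSON field names below are guesses.
with open("jira_import.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["Summary", "Description", "Lighthouse ID"])
    for path in glob.glob("lighthouse_export/tickets/*/ticket.json"):
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
        ticket = data.get("ticket", data)  # some exports wrap the payload
        writer.writerow([ticket.get("title", ""),
                         ticket.get("body", ""),
                         ticket.get("number", "")])

The Lighthouse ID column gives you the mapping you'll need later when adding attachments and comments via the API.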
We just went through a Lighthouse-to-JIRA migration and ran into this. The best thing to do in your script is to start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file to import into JIRA that contains all the tickets.
In Ruby (which is what we used), it would look something like this:
Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket|
JSON.parse(File.open(ticket).read).each do |data|
# access ticket data and add it to a CSV
end
end
I'm trying to import a rather large CSV file (90 MB, 255,000 records) into WordPress using the CSV Importer plugin.
I realize this plugin wasn't made for files this large, but I can't seem to figure out any other option here. Using phpMyAdmin will only let me import certain things, such as the post title, etc... I also need to add some postmeta along with each post in the CSV.
I've tried everything I can think of, from uploading the file to my server via FTP and then editing the plugin to read the local file (it just stops working), to changing the PHP max execution time and the maximum upload file size, all to no avail.
Any help is appreciated here.
Aside from breaking the CSV file into multiple parts, try importing the data yourself, with your own custom PHP script.
See the wp_insert_post() documentation and, of course, PHP's fgetcsv().
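If you do go the route of breaking the CSV into multiple parts first, a short script can do the splitting while repeating the header row in every chunk. A rough sketch in Python; the file names and chunk size are placeholders.

import csv

CHUNK_SIZE = 5000  # rows per output file; tune to what the importer can handle


def write_part(part, header, rows):
    # Each part gets its own copy of the header row.
    with open(f"import_part_{part:03d}.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(header)
        writer.writerows(rows)


with open("big_import.csv", newline="", encoding="utf-8") as src:
    reader = csv.reader(src)
    header = next(reader)
    part, rows = 1, []
    for row in reader:
        rows.append(row)
        if len(rows) == CHUNK_SIZE:
            write_part(part, header, rows)
            part, rows = part + 1, []
    if rows:
        write_part(part, header, rows)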
I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db?
I would rather do that than change the script to actually upload the pics into the db; I have heard horror stories about storing binary files in databases.
Since I am new to all of this, I am at a loss. Have tried doing some research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using: a Perl CGI script, MySQL db, Linux server and the files are being uploaded to the server. I AM NOT looking to add the actual files to the db. Just their location(s).
It sounds like your method is complete up to the point where you take the upload, turn it into a string, and toss it into MySQL, much like reading any file in as a string. However, since CGI gives you a filehandle rather than a filename to read from, you are wondering where that file actually is.
If you're using CGI.pm, then upload(), uploadInfo(), the upload's param, and private temp files will help you deal with where the uploaded file data comes from. Where the files are stashed after the remote client and the CGI script are done is usually not permanent and is, at a minimum, volatile.
You've got a bunch of already-uploaded files whose details need to be added to the db? It should be trivial to dash off a one-off script that loops through all the files and inserts the details into the DB. If they're all in one spot, a simple opendir()/readdir() loop will catch them all; otherwise, build a list of file paths and loop over that.
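For that one-off backfill, something like the following works. This sketch is in Python with the PyMySQL driver purely for illustration (it's just as easy in Perl with DBI), and the connection settings, table, and column names are made up, so adjust them to your schema.

import os
import pymysql  # assumes the PyMySQL driver is installed; Perl's DBI plays the same role

# Hypothetical connection settings and table/column names; adjust to your setup.
conn = pymysql.connect(host="localhost", user="dbuser",
                       password="secret", database="mydb")
cur = conn.cursor()

for root, dirs, files in os.walk("/path/to/uploads"):
    for name in files:
        # Record only the file's location and name, not its contents.
        cur.execute(
            "INSERT INTO uploads (file_name, file_path) VALUES (%s, %s)",
            (name, os.path.join(root, name)),
        )

conn.commit()
conn.close()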
If you're talking about recording new uploads on the server, then it would be something along these lines (a rough code sketch follows at the end of this answer):
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, providing you with complete control over downloads.
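Here is that new-upload flow sketched in Python (in your Perl CGI setup, CGI.pm and DBI would play the same roles); the driver choice, directory, table, and column names are illustrative only.

import os
import shutil
import pymysql  # illustrative driver choice; any DB API with transactions works

UPLOAD_DIR = "/var/www/app/uploads"  # final resting place, ideally outside the webroot


def record_upload(conn, tmp_path, original_name, mime_type):
    """Insert the file's details, then move the file to a name based on its new ID."""
    size = os.path.getsize(tmp_path)
    with conn.cursor() as cur:
        # Table and column names are made up for illustration.
        cur.execute(
            "INSERT INTO uploads (original_name, mime_type, size) VALUES (%s, %s, %s)",
            (original_name, mime_type, size),
        )
        new_id = cur.lastrowid
    # Use the database ID as the filename to avoid collisions and overwrites.
    ext = os.path.splitext(original_name)[1]
    shutil.move(tmp_path, os.path.join(UPLOAD_DIR, f"{new_id}{ext}"))
    conn.commit()  # commit only once the file has been moved successfully
    return new_id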