I have a rather large spreadsheet that lists out all the images on a client site along with the image URL. It has updated information for the image alt attributes as well as new captions.
Since the list is so large, I'm looking for something like an SQL query I could run that would import the necessary information from the spreadsheet and update the related image metadata in the database. I'm not a backend guy, but the guy who usually handles this kind of thing is unavailable for a while, so it's fallen to me. Is this something I can do in MySQL?
edit: I found this post that deals with what seems to be a similar issue. Would I be able to use this method for the image captions?
In case anyone happens to want to solve a similar issue, here's how I solved it.
This post details where alts and captions live in the WordPress database: https://wordpress.stackexchange.com/questions/1777/are-captions-stored-anywhere. I used that as the basis for my process.
First, I ran the following query on the wp_postmeta table and copy/pasted the results into a new sheet in my spreadsheet.
SELECT * FROM `wp_postmeta` WHERE meta_key LIKE '_wp_attached_file'
Second, I ran a similar query on the table wp_posts and also put the results in a new sheet.
SELECT * FROM `wp_posts` WHERE post_type LIKE 'attachment'
I then used a Google Sheets (the spreadsheet software I'm using) add-on to merge each sheet with my main sheet. The posts data I merged on the basis of matching URLs (the data I have to work with in my original spreadsheet). Then I merged the postmeta data based on matching post IDs.
Finally, once I had all of that, I could download my file as a .csv file and import it into the postmeta and posts tables. I made sure to do my first import on a local install of the site so as to not screw with a live site. I'm using Sequel Pro for managing databases, and it has a handy CSV import tool that allows you to do an UPDATE on the postmeta and posts tables. My import used the post IDs and meta_key (for the alts) and post_ID/ID (for the captions) to match up the data.
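For reference, each row of that import effectively boils down to a pair of UPDATEs along these lines (a sketch with placeholder ID and text values; the alt text lives in wp_postmeta under the _wp_attachment_image_alt key, and the caption is stored as the attachment post's excerpt):
-- Sketch only: 123, 'New alt text' and 'New caption' are placeholders.
UPDATE wp_postmeta SET meta_value = 'New alt text' WHERE post_id = 123 AND meta_key = '_wp_attachment_image_alt';
-- The caption is the attachment post's excerpt.
UPDATE wp_posts SET post_excerpt = 'New caption' WHERE ID = 123 AND post_type = 'attachment';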
That's the process that worked for me, though I still did have a dozen images with funky URLs that I just updated manually rather than messing around with Regex.
Related
I've made a website using ACF, and all the content has been copied over from the default WordPress editor to the new ACF fields.
The trouble is my client didn't delete the content from the default WordPress editor as they went along, and as a result it's causing loads of broken links.
Long story short, is there a way to delete all content from the default post editor site wide, rather than update every single post?
To delete all content from WordPress posts, you will need to export all post IDs to a CSV file.
In phpMyAdmin you could use
SELECT ID FROM `wp_posts`
Then export the query result as a CSV file. Open the CSV and create a new column for the content, leaving it empty.
To test my answer, I installed the free version of the first WordPress plugin I found for the search query "csv import" on https://wordpress.org/plugins/
Then, in your WordPress dashboard, go to the newly installed plugin's menu and import the CSV file you created.
Let the importer update existing posts and set the content as the only column to update.
Before updating all content, I would test the importer configuration on one unpublished test post to make sure the ACF fields aren't emptied by the import. That shouldn't happen as long as you don't miss a setting in the importer configuration.
In MySQL, deleting all content from the default posts is a straightforward task, e.g. using phpMyAdmin's SQL section after selecting the WP database (find its name in wp-config.php):
DELETE FROM wp_posts;
DELETE FROM wp_postmeta;
Please note this will permanently remove ALL content from your website; always take a backup first.
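If the aim is only to empty what's in the default editor while keeping the posts themselves (and any ACF values, which live in wp_postmeta), a targeted UPDATE is a gentler sketch of the same idea, assuming the standard wp_ table prefix:
-- Blank only the classic-editor content; postmeta (including ACF fields) is untouched.
UPDATE wp_posts SET post_content = '' WHERE post_type = 'post';
As above, take a backup before running anything like this.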
You can also find a manual plugin for doing this here.
I'm trying to transfer a ton of data from an old SharePoint platform, through Access, to SharePoint 365.
When I do this, I get an error like this:
[Screenshot of the error in Access]
As far as I can tell, this means there's either a server problem or something was misspelled.
I can transfer several other views without any problems, so I'm hoping that someone out there can shed some light on this.
The cause of this problem can be the names of attachments. I just had the same problem importing an SP2007 list into Access, and I found out that I could import everything except two records whose attachments had a + symbol in their names. I had to delete those attachments, and then the import was OK.
The bigger problem was finding the problematic records: I had to make a special view filtered by the date of last change and then tried importing records from different time ranges until I identified the records causing the trouble.
I'm currently familiarizing myself with Google BigQuery by working through the examples at https://cloud.google.com/bigquery/web-ui-quickstart. Running a query over the public datasets works fine.
I run into problems when uploading custom data into a new table through the Web UI. I create a new dataset and table, and upload the CSV file provided with the example case. As in the example, I input the schema and submit the file. Now the upload window stays on top and turns grey as if it's working. Nothing seems to happen afterwards, though. When I close the upload window after a long wait, the table seems to be created in the tree on the left. However, when clicking on the table an error is shown:
"Unable to find table: ndwtest-984:csvtest.csvdata"
This seems like a trivial action, but I cannot seem to get it to work. I've tried various different files, uploaded the file to Google Cloud Storage first, and played around with the advanced options for the last two days, but I keep getting the same error.
Help would be much appreciated.
Some steps to help you:
billing must be enabled
you need to choose to upload a single TXT file from the example, e.g. yob2013.txt, and not the zip file
make sure the schema is entered as text: name:string,gender:string,count:integer
on the last screen of the wizard you don't need to change the default CSV option parameters (for demo purposes it works as it is)
I just tried the example, and it does work for me. If you still get errors, you can check the Job History menu in the Web UI; a direct link is below, but note that you need to put your own project ID into the link.
https://bigquery.cloud.google.com/jobs/YOUR_ANONYMOUS_PROJECT_ID_HERE?pli=1
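If the Job History shows the load as successful, a quick query against the new table should confirm it is actually there and populated; a sketch using the dataset and table names from the error message above (with the older legacy-SQL dialect the table would be written as [csvtest.csvdata]):
-- Standard SQL; returns a handful of rows if the load really worked.
SELECT * FROM csvtest.csvdata LIMIT 10;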
I run a popular forum where one of the members who has made a lot of awesome contributions recently contacted us. He has posted several hundred images from his Webshots gallery, but the service is changing and so are all the images. I need to change all the image src paths in all of his posts in our MySQL database.
He was given the opportunity to download all the images which he has given to me. Because I will be having to do a lot of these changes in production I need to make sure that I don't screw this up.
The image src in his posts looks similar to this, where I believe 0103935217 is his user ID.
http://inlinethumb25.webshots.com/47576/2156388770103935217S500x500Q85.jpg
The images downloaded from the service look like this. Notice the S500x500Q85 has been replaced with a random string.
2156388770103935217Reacil_fs.jpg
So I have two tasks:
I need to rename all the files I've put on my server, removing the random characters and the _fs designation.
I need to change the file paths in all of his posts, removing the domain and container and replacing them with mine. In addition, I need to remove the S500x500Q85 designation.
For 1, I have a regex, but I'm unsure how to do the replacement: 0103935217\w+?_fs
For 2, I know my query needs to be something along the lines of the below. I'm a little unsure how to do this, though; is it with a regex?
UPDATE posts SET post_body = replace(post_body, '','') WHERE user_id = 1234
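For 2, if the host/container prefix and the size suffix are fixed strings (as they are in the example URL above), plain REPLACE calls may be enough and no regex is needed; a sketch, where the destination path is a placeholder:
-- Swap the webshots host/container prefix for your own path (placeholder).
UPDATE posts SET post_body = REPLACE(post_body, 'http://inlinethumb25.webshots.com/47576/', 'http://www.example.com/images/') WHERE user_id = 1234;
-- Drop the S500x500Q85 size designation.
UPDATE posts SET post_body = REPLACE(post_body, 'S500x500Q85.jpg', '.jpg') WHERE user_id = 1234;
If the host number and container vary from post to post, you'd need one REPLACE per prefix, or a regex-capable approach such as REGEXP_REPLACE in MySQL 8.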
I have a client who needs to have data imported into Drupal from a large spreadsheet. They already had the modules set up, and I have the data all in the right tables... or so it seems. I have run my own custom scripts to manipulate what was in the spreadsheets and exported/imported it into the Drupal database.
However, when I go to view that type of content in the backend, it shows there are multiple pages of data but displays "No content available." on every page. Here are the tables I imported to:
// for the business listing node type
field_data_field_bd_address_city
field_data_field_bd_address_street
field_data_field_bd_address_zip
field_data_field_bd_business_type
field_data_field_bd_contact_email
field_data_field_bd_contact_name
field_data_field_bd_description
field_data_field_bd_image
field_data_field_bd_listing_type
field_data_field_bd_phone
field_data_field_bd_tags
field_data_field_bd_website
// drupal default tables
node
node_comment_statistics
taxonomy_index
taxonomy_term_data
taxonomy_term_hierarchy
taxonomy_vocabulary
Am I missing any tables that I need to import data into to make connections?
I had this problem before and it took me a while to solve it. This was before anyone had mentioned the Feeds module to me, so I thought it was my only option.
If you're going to insert straight into your database, you need to enter the data into the revision tables as well. So you would have:
// for the business listing node type
field_data_field_bd_address_city
field_data_field_bd_address_street
field_data_field_bd_address_zip
field_data_field_bd_business_type
field_data_field_bd_contact_email
field_data_field_bd_contact_name
field_data_field_bd_description
field_data_field_bd_image
field_data_field_bd_listing_type
field_data_field_bd_phone
field_data_field_bd_tags
field_data_field_bd_website
And also:
// for the business listing node type
field_revision_field_bd_address_city
field_revision_field_bd_address_street
field_revision_field_bd_address_zip
field_revision_field_bd_business_type
field_revision_field_bd_contact_email
field_revision_field_bd_contact_name
field_revision_field_bd_description
field_revision_field_bd_image
field_revision_field_bd_listing_type
field_revision_field_bd_phone
field_revision_field_bd_tags
field_revision_field_bd_website
The same goes for the node table. This took me a while to work out, but it did the job. Of course, someone then mentioned the Feeds module, which would have saved me time, but I thought I'd share what worked for me.
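If the data is already sitting in the field_data_* tables, one way to fill the revision tables is to copy the rows across; a sketch for one field and for node_revision (in Drupal 7 the field_data_* and field_revision_* tables share the same columns, and the 'business_listing' type name is an assumption):
-- Copy one field's data rows into its revision table (identical column layout in D7).
INSERT INTO field_revision_field_bd_phone SELECT * FROM field_data_field_bd_phone;
-- Create a matching revision row for each imported node of the assumed type.
INSERT INTO node_revision (nid, vid, uid, title, log, timestamp, status, comment, promote, sticky)
SELECT nid, vid, uid, title, '', changed, status, comment, promote, sticky
FROM node
WHERE type = 'business_listing' AND vid NOT IN (SELECT vid FROM node_revision);
Run it against a copy of the database first.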
Instead of manually importing the data directly into the database and trying to figure out how to satisfy all the relational dependencies to make it work, I would suggest using the Feeds module.
If you wish to continue with the manual process, perhaps this ER diagram of the Drupal database will help (keep in mind it is a bit dated and was likely based on earlier versions of Drupal 7).
I figured it out. I had to add data to the node_revision table, with a status of 1 set for all the nodes, and it worked out just fine after that!