How to migrate taxonomy terms along with their parent terms using the Migrate API.
id  parent  child
1   P1      C1
2   P1      C2
As shown above, P1 has two children, C1 and C2.
How can the Migrate API be used to create a configuration file for this import?
Hi, you need to consider your taxonomy structure and which Drupal version you are migrating to. One possibility is the Taxonomy Import module (https://www.drupal.org/project/taxonomy_import), which requires Drupal 8; read its documentation and install it if you haven't already. You can also manage your site with Drush, if a shell prompt doesn't scare you :).
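If you do want the Migrate API route specifically, here is a rough sketch of a migration configuration for a CSV shaped like the table above. This is a hedged example, not a drop-in file: it assumes the contrib modules migrate_plus, migrate_tools, and migrate_source_csv, and the vocabulary name, file path, and plugin options are illustrative and version-dependent.

id: taxonomy_terms_with_parents
label: Import taxonomy terms with their parents
source:
  plugin: csv
  path: /path/to/terms.csv
  ids: [id]
process:
  name: child
  vid:
    plugin: default_value
    default_value: my_vocabulary
  parent:
    # entity_generate (from migrate_plus) looks up the parent term by
    # name and creates it if it does not exist yet.
    plugin: entity_generate
    entity_type: taxonomy_term
    value_key: name
    bundle_key: vid
    bundle: my_vocabulary
    source: parent
destination:
  plugin: entity:taxonomy_term

With migrate_tools installed, the import would then be run with drush migrate:import taxonomy_terms_with_parents.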
This is a long story and I am a little bit stuck; I have tried many things and was able to move forward. The question is: what now?
This is the full story:
I started working on a .NET Core 2.1 project, and for that I installed Visual Studio 2019 and other tools. The important thing is that I installed SQL Server 2017 Developer edition (the free one) with the default parameters, which created an instance called MSSQLSERVER. Unfortunately, the project needed a different instance name, MSSQL2017. I tried to change the name of the instance, but I couldn't because it is a free version; reinstalling it did not work either, nor did a few other things I tried. The important one is that a colleague changed the default SQL connection string to make it compatible with my installation, in order to see whether the problem was the setup or something else. It worked, and the database and tables were created for that project.

So I managed to create another instance with the proper name, MSSQL2017, created the users, and so on. When I go to SQL Server Management Studio, I notice that the tables are not created, so I run Profiler and run the project again, and this is what I get: 'Cannot insert duplicate key row in object 'sys.syssingleobjrefs' with unique index 'clst'. The duplicate key value is (67439, 76, 101).' That's where I am lost: I can't find what sys.syssingleobjrefs refers to, so I have no idea how to fix this mess. Any help?
Update: so sys.syssingleobjrefs is a system base table whose contents I can't see. How do I modify it?
SELECT * FROM sys.syssingleobjrefs does not work.
syssingleobjrefs is a system base table accessible only through the Dedicated Administrator Connection (DAC).
You have to use sqlcmd -A in order to access this table.
https://learn.microsoft.com/en-us/sql/relational-databases/system-tables/system-base-tables?view=sql-server-ver15
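For example, a minimal session sketch, assuming a local named instance MSSQL2017 and Windows authentication (-A opens the DAC, which allows only a single connection):

sqlcmd -S localhost\MSSQL2017 -A -E
1> SELECT * FROM sys.syssingleobjrefs;
2> GO

Note that even over the DAC, directly modifying system base tables is unsupported; the DAC is intended for inspection and diagnostics.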
I'm trying to configure some role bindings in OpenShift that I'm led to believe would require the use of some OpenShift system groups. However, when I try adding a role to any of the system groups, they can't be found.
One specific example that I'm trying to configure is a rolebinding that allows an image to be pulled from my 'demos' project into another project (including newly created ones).
My research into this has led me to this particular page in the documentation, which describes a 'system:serviceaccount' group that sounds suited to my needs.
https://docs.okd.io/3.11/dev_guide/service_accounts.html#dev-sa-user-names-and-groups
Based on some of the examples provided on this page, I'm currently trying to use the following command to grant these permissions.
oc policy add-role-to-group system:image-puller system:serviceaccount -n demos
I expected this would allow me to pull in one of the images that I've got stored in the 'demos' project and deploy it into another project. However the command returns the following.
Warning: Group 'system:serviceaccount' not found
role "system:image-puller" added: "system:serviceaccount"
The role is seemingly still added to the group for the 'demos' project; however, it's still not possible for me to pull one of the images stored there into another project.
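For what it's worth, the groups documented on the linked page are plural and project-scoped (system:serviceaccounts and system:serviceaccounts:<project>), so a variant granting pull access to all service accounts of some other project (here a hypothetical 'other-project') would look like:

oc policy add-role-to-group system:image-puller system:serviceaccounts:other-project -n demos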
I have a project in which I need to fetch data from different tables which are located in different portlets in a plugin project.
Suppose we have two portlets, portlet A and portlet B, which have tables A1 and B1 respectively.
I want to fetch data from both portlets.
Can anyone help?
I have read about custom SQL queries (http://www.liferaysavvy.com/2013/02/getting-data-from-multiple-tables-in.html), but I still can't find a proper solution.
A good habit is to keep the portlet and the service portlet (the model) separate. It depends on how extensive the project is and which build tool you are using (Ant, Maven). I think the advantage is that the implementation of the DB operations is visible to any plugin in the portlet project (a common JAR file in the lib directory, with the portlet-service in webapps).
More about the service builder can be found in the Service Builder documentation.
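As for the custom SQL route mentioned in the question, the query itself is just a join across the two portlets' tables. A sketch, with all table and column names purely illustrative:

-- Hypothetical join across tables owned by two different portlets;
-- table and column names are illustrative only.
SELECT a.id, a.name, b.value
FROM A1 a
JOIN B1 b ON b.a1_id = a.id;

In Liferay's custom SQL setup, a query like this would live in the service layer's custom SQL file and be exposed through a finder, so any portlet can call it through the shared service JAR.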
I've started using git with a small dev team of people who come and go on different projects; it was working well enough until we started working with WordPress. Because WordPress stores a lot of configuration in MySQL, we decided we needed to include that in our commits.
This worked well enough (running mysqldump on pre-commit, and loading the dumped file back into MySQL on post-checkout) until two people made modifications to plugins and committed; then everything broke again.
I've looked at every solution I could find, and thought Liquibase was the closest option, but it wouldn't work for us: it requires you to specify the schema in XML, which isn't really possible because we are using plugins that insert data/tables/modifications into the DB automatically.
I plan on putting a bounty on it in a few days to see if anyone has the "goldilocks solution".
The question:
Is there a way to version-control a MySQL database semantically, i.e. not using diffs (EDIT: meaning it doesn't just take two versions and diff them, but instead records the actual queries run, in sequence, to get from the old version to the current one), without requiring a developer-written schema file, and in a form that can be merged using git?
I know I can't be the only one with such a problem, but hopefully there is somebody with a solution?
The proper way to handle db versioning is through a version script which is additive-only. Because of this, it will conflict all the time, as each branch will be appending to the same file. You want that: it makes the developers realize how each other's changes affect the persistence of their data. git rerere will ensure you only resolve a conflict once, though (see my blog post that touches on rerere sharing: http://dymitruk.com/blog/2012/02/05/branch-per-feature/).
Keep wrapping each change in an if-then clause that checks the version number, changes the schema or modifies lookup data or something else, then increments the version number. You just keep doing this for each change.
Here is an example, with the pseudocode fleshed out as T-SQL-style SQL (adapt to your database):
IF OBJECT_ID('dbo.schema_version') IS NULL
BEGIN
    CREATE TABLE dbo.schema_version (version INT NOT NULL);
    INSERT INTO dbo.schema_version (version) VALUES (0);
END

-- now someone adds a feature that adds a members table
IF (SELECT version FROM dbo.schema_version) = 0
BEGIN
    CREATE TABLE dbo.members (
        id INT IDENTITY(1,1) PRIMARY KEY,   -- pk on id (clustered by default)
        userid NVARCHAR(100) NOT NULL,
        passwordhash VARBINARY(64) NOT NULL,
        salt VARBINARY(32) NOT NULL
    );
    CREATE NONCLUSTERED INDEX ix_members_userid ON dbo.members (userid);
    UPDATE dbo.schema_version SET version = 1;
END

-- now someone adds a customers table
IF (SELECT version FROM dbo.schema_version) = 1
BEGIN
    CREATE TABLE dbo.customers (
        id INT IDENTITY(1,1) PRIMARY KEY,
        fullname NVARCHAR(200) NULL,
        address NVARCHAR(400) NULL,
        phone NVARCHAR(40) NULL
    );
    CREATE NONCLUSTERED INDEX ix_customers_fullname_phone ON dbo.customers (fullname, phone);
    UPDATE dbo.schema_version SET version = 2;
END

-- and so on
The benefit of this is that you can automatically run this script after a successful build of your test project if you're using a static language - it will always roll you up to the latest. All acceptance tests should pass if you just updated to the latest version.
The question is: how do you work on two different branches at the same time? What I have done in the past is just to spin up a new instance whose db name is suffixed with the branch name. Your config file is cleaned (see git smudge/clean) to set the connection string to point to the new or existing instance for that branch.
If you're using an ORM, you can automate this script generation. For example, NHibernate will let you export, as a SQL script, the graph changes that are not yet reflected in the db schema. So if you added a mapping for the Customer class, NHibernate will generate the table creation script for you. You then just script the addition of the if-then wrapper and you're fully automated on the feature branch.
The integration branch and the release candidate branch have some special requirements that will require wiping and recreating the db if you are resetting those branches. That's easy to do in a hook by checking (via git branch --contains) that the new revision contains the old one; if not, wipe and regenerate.
I hope that's clear. This has worked well in the past, and it requires that each developer be able to create and destroy their own db instances on their machine, although it could work on a central server with an additional instance naming convention.
At my company we have several developers all working on projects internally, each with their own virtualbox setup. We use SVN to handle the source, but occasionally run into issues where a database (MySQL) schema change is necessary, and this has to be propagated to all of the other developers. At the moment we have a manually-written log file which lists what you changed, and the SQL needed to perform the change.
I'm hoping there might be a better solution -- ideally one linked to SVN, e.g. if you update to revision 893 the system knows this requires database revision 183 and updates your local schema automagically. We're not concerned with the data being synched, just the schema.
Of course one solution would be to have all developers running off a single, central database; this however has the disadvantage that a schema change could break everyone else's build until they do an svn up.
One option is a data dictionary in YAML/JSON. There is a nice article here
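A hypothetical data-dictionary entry, purely illustrative since the exact format depends on the tooling built around it:

user:
  columns:
    id:   { type: bigint, primary_key: true }
    name: { type: varchar(20) }
  indexes:
    - columns: [name]

A script can then diff this dictionary against the live schema and emit the corresponding ALTER statements.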
I'd consider looking at something like MyBatis Schema Migration tools. It isn't exactly what you describe, but I think it solves your problem in an elegant way and can be used without pulling in core MyBatis.
In terms of rolling your own, what I've always done is to have a base schema file that creates the schema from scratch, plus a delta file that appends all schema changes as deltas, separated by version numbers (you can try to use SVN revision numbers, but I always find it easier to just increment manually). Then have a schema_version table that records that version for the live database, keep the same version in the canonical schema file, and have a script that runs, from the delta file, all changes subsequent to the existing DB version.
So you'd have a schema like:
-- Version: 1
CREATE TABLE user (
id bigint,
name varchar(20))
You have the tool manage the schema version table and see something like:
> SELECT * FROM schema_version;
1,2011-05-05
Then you have a few people add to the schema and have a delta file that would look like:
-- Version: 2
ALTER TABLE user ADD email varchar(20);
-- Version: 3
ALTER TABLE user ADD phone varchar(20);
And a corresponding new schema checked in with:
-- Version: 3
CREATE TABLE user (
id bigint,
name varchar(20),
email varchar(20),
phone varchar(20))
When you run the delta script against a database with the initial schema (Version 1), it will read the value from the schema_version table and apply all deltas greater than that to your schema. This gets trickier when you start dealing with branches, but serves as a simple starting point.
There are a couple of approaches I've used before or currently use:
Sequential Version Number
Most implementations of this approach have a separate program that grabs a version number from the database, then executes any statements associated with database versions higher than that number, finally updating the version number in the database.
So if the version is 37 and there are statements associated with versions 1 through 38 in the upgrading application, it will skip 1 through 37 and execute the statements to bring the database to version 38.
I've seen implementations that also allow for downgrade statements for each version to undo what the upgrade did, and this allows for taking a database from version 38 back down to version 37.
In my situation we had this database upgrading in the application itself and did not have downgrades. Therefore, changes were source-controlled because they were part of the application.
Directed Acyclic Graph
In a more recent project I came up with a different approach. I use classes that are nodes of a directed acyclic graph to encapsulate the statements that perform specific upgrades to the database for each specific feature/bugfix/etc. Each node has an attribute declaring its unique name and the names of any nodes on which it depends. These attributes are also used to search the assembly for all upgrade nodes.
A default root node is given as the dependency node for any nodes without dependencies, and this node contains the statements to create the migrationregister table that lists the names of nodes that have already been applied. After sorting all the nodes into a sequential list, they are executed in turn, skipping the ones that are already applied.
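The register table itself can be minimal; a sketch (the node name is the only thing strictly needed, the timestamp is a convenience, and the exact types are illustrative):

CREATE TABLE migrationregister (
    name       VARCHAR(200) NOT NULL PRIMARY KEY,
    applied_at DATETIME     NOT NULL
);

-- the runner skips any node whose name is already recorded here
SELECT name FROM migrationregister;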
This is all contained in a separate application from the main application, and they are source-controlled in the same repository so that when a developer finishes work on a feature and the database changes associated with it, they are committed together in the same changeset. If you pull the changes for the feature, you also pull the database changes. Also, the main application simply needs a list of the expected node names. Any extra or missing, and it knows the database does not match.
I chose this approach because the project often has parallel development by multiple developers, with each developer sometimes having more than one thing in development (branchy development, sometimes very branchy). Juggling database version numbers was quite the pain. If everybody starts at version 37, and "Alice" starts on something and uses version 38 for her database change, and "Bob" also starts work that has to change the database and also uses version 38, someone will eventually have to change. So let's say Bob finishes and pushes to the server. Now, when Alice pulls Bob's changeset, she has to renumber her statements to version 39 and set her database version back to 37, so that Bob's changes get executed and then hers execute again.
But when all that happens on pulling Bob's changeset is a new migration node and another line in the list of node names to check against, things just work.
We use Mercurial (distributed) rather than SVN (client-server), so that's part of why this approach works so well for us.
An easy solution would be to keep a complete schema in SVN (or whatever library). That is, every time you change the schema, run MySQL "desc" to dump out descriptions of all the tables, overwrite the last such schema dump with this, and then commit. Then if you run a version diff, it should tell you what changed. You would, of course, need to keep all the tables in alphabetical order (or some predictable order).
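One way to script that dump: mysqldump --no-data has the same schema-only effect as running desc per table, and its output diffs reasonably well (the database name and file name here are illustrative):

mysqldump --no-data --skip-comments mydb > schema.sql
svn commit -m "schema snapshot" schema.sql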
For a different approach: years ago I worked on a project for a desktop application where we periodically sent out new versions that might have schema changes, and we wanted to handle these with no user intervention. So the program had a description of the schema it expected. At startup it made some metadata calls to check the schema of the database it actually had, compared that to what it expected, and then automatically updated the schema to match. Usually when we added a new column we could simply let it start out null or blank, so this required pretty much zero coding effort once we got the first version working. When some actual manipulation was required to populate new fields, we'd have to write custom code, but that was relatively rare.
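A minimal sketch of that metadata check, here in SQL against MySQL's information_schema (the schema, table, and column names are illustrative):

-- what columns does the live table actually have?
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'mydb'
  AND table_name = 'user';

-- if an expected column is missing, add it with a safe default
ALTER TABLE user ADD COLUMN email VARCHAR(20) NULL;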