Elasticsearch error Magento 2.4 reindex Catalog Search index - elasticsearch-7

Suddenly, and as far as I know we made no changes, we get an error when reindexing Catalog Search:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"mapper [category_position_25536] cannot be changed from type [text] to [integer]"}],"type":"illegal_argument_exception","reason":"mapper [category_position_25536] cannot be changed from type [text] to [integer]"},"status":400}
I reindexed a couple of times, used bin/magento indexer:reset, and removed the indexes like this:
curl -XDELETE https://host:9200/prefix_product_4_v904 (and likewise for every index)
For now there is no error when searching, there are just no results. Even on category pages no products show up.
Can someone point me in the right direction?
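For reference, the reset sequence described above boils down to something like this (host, prefix and index names are placeholders; the indexer id and the final cache flush assume a stock Magento 2.4 setup):
curl -XGET 'https://host:9200/_cat/indices?v'            # list the current indexes
curl -XDELETE 'https://host:9200/prefix_product_4_v904'   # repeat for every stale index
bin/magento indexer:reset catalogsearch_fulltext
bin/magento indexer:reindex catalogsearch_fulltext
bin/magento cache:flush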

Related

Couchbase CBQ Silent Fail

I'm trying to query a secure bucket via Couchbase cbq on Windows.
I've got couchbase/bin in my PATH and from the command line I am able to run this:
cbq -engine=http://localhost:8091 -c=hug_contenthub:password
seems to connect OK:
Connected to : http://localhost:8091/. Type Ctrl-D or \QUIT to exit.
Path to history file for the shell : C:\Users\kevin\.cbq_history
cbq>_
From here on I can't do anything except quit. I tried several commands:
cbq> select 1=1
> SELECT DISTINCT type FROM `beer-sample`
> create primary index on `beer-sample`
They are all ignored. No feedback at all. The only thing that changes is that cbq> becomes "....>"; the cbq bit is stripped off.
What am I missing here?
I'm relatively new to Couchbase Server and used to the old MS SQL ways, so I think I ran into a similar problem.
My solution: I added a semicolon at the end of each query.
I also tend to prefer the new Query Workbench tool instead of cbq when I'm just writing and tweaking N1QL queries (but maybe that's just me).
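For example, the statements from the question run as soon as they end with a semicolon:
cbq> SELECT DISTINCT type FROM `beer-sample`;
cbq> CREATE PRIMARY INDEX ON `beer-sample`;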

Error in Magmi configurable product import with Configurable Item processor

I am trying to import a set of configurable and simple products into Magento using Magmi. I have followed (I think) all the necessary steps described here: http://wiki.magmi.org/index.php?title=Configurable_Item_processor.
Here is a test file with the data that I load into the importer: https://docs.google.com/spreadsheets/d/17_fWYfYmSiXdLYp80P0kafPrFYzwzp7JHGNlHQTM0S4/edit?usp=sharing
Test cases:
Running the import without the Configurable Item processor works just fine but does not create the link between the simple products and the configurable ones in the backend (which makes perfect sense).
Running with the Configurable Item processor, with all combinations of the plugin options (Perform simples/configurable link y/n & auto match simples skus before configurable y/n), always yields the following errors:
1 SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'attribute_id' cannot be null -
2 SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'attribute_id' cannot be null - ERROR ON RECORD #3
The error is triggered by this line of code
INSERT INTO `catalog_product_super_attribute` (`product_id`,`attribute_id`,`position`) VALUES (?,?,?)
that you can find in /plugins/base/itemprocessors/configurables/magmi_configurableprocessor.php:246
I searched for a solution and found this one http://blog.mdnsolutions.com/index.php/magmi-not-importing-configurable-products/ where the author solved the issue by replacing the line above with:
INSERT INTO `catalog_product_super_attribute` (`product_id`,`attribute_id`,`position`) VALUES (:a,:b,:c)
That did not work for me.
There is also another question on this issue posted here, Magmi Configurable Products Importation, but the solution appears to be rather vague.
Working with:
Magento - 1.9.0.1
Magmi - 0.7.20
Configurable Item processor - 1.3.7a
OS is Ubuntu running PHP 5.3.10 & MySQL 5.5.34
Some thoughts based on my experience:
Are any of your attributes mandatory? I suggest putting something into the size column for the configurable product.
For visibility, I use numeric values. For configurable products it should be 4, for simple products it should be 1 (you don't want them to be visible individually; rather, you want them to be visible within the configurable products).
Finally, how are you creating your CSV files? If just with Excel, you may get problems with the encoding and how it separates fields. I run my CSV files through OpenOffice Calc to make the files UTF-8 and ensure the text fields are handled properly.
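For illustration, a minimal fragment with numeric visibility values (the column set here is an assumption; only the visibility values matter for this point):
sku,type,visibility,name
"CONF-1",configurable,4,"Shirt"
"SIMP-1-S",simple,1,"Shirt Small"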
I just ran into the exact same problem. Simples work, but configurables don't, even with the exact same information.
As it turns out, if "configurable_attributes" names a wrong attribute on a simple product, the import still goes through.
Example
_attribute_set,type,configurable_attributes,size_option,color,
"Default",simple,"color,size","Small","Gold"
No error
_attribute_set,type,configurable_attributes,size_option,color,
"Default",configurable,"color,size","Small","Gold"
Error
It turns out "color,size" was not matching the actual attribute codes.
It should have been "color,size_option".
Check your attributes
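For example, the failing configurable row from above imports once configurable_attributes lists the real attribute codes:
_attribute_set,type,configurable_attributes,size_option,color,
"Default",configurable,"color,size_option","Small","Gold"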

Migrated database from sqlserver to mysql seems to have error in rails

I have finished migrating a database from SQL Server to MySQL, but it seems to cause problems in Rails. When I fetch a record in the Rails console, e.g.:
#<AuthAdmin:0xb6e506f8 @attributes={"Status"=>"1", "LastUpdateSeqNo"=>nil, "CreationDate"=>"2005-08-03 22:53:57", "AuthAdminID"=>"8987", "PropertyID"=>nil, "Password"=>"trustbss", "LastUpdate"=>"2012-07-12 05:15:02", "UserType"=>"0", "LoginName"=>"dev"}>
Attributes such as CreationDate and AuthAdminID, which are supposed to be a date and an integer, are displayed as strings. But when I do
AuthAdmin.find(:first).AuthAdminID.class
the output is Fixnum.
You can check the record given above; it shows strings. Now when I do an arithmetic operation in my views I hit the error "String can't be coerced into Fixnum". Explicitly casting everything to its own type is a very bad idea.
Hope that explains my problem.
Looks like no one has come across this problem.
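One way to check whether the migrated MySQL columns actually kept their types (a sketch; the model and column names are from the question, the auth_admins table name in the migration calls is an assumption):
# In the Rails console: print the column types ActiveRecord sees.
AuthAdmin.columns.each { |c| puts "#{c.name}: #{c.type}" }
# If CreationDate / AuthAdminID come back as :string, the migration created the
# wrong column types; fix them in a migration rather than casting in views:
change_column :auth_admins, :CreationDate, :datetime
change_column :auth_admins, :AuthAdminID, :integer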

How does Rails build a MySQL statement?

I have the following code, run on Heroku inside a controller, that intermittently fails. To me it's a no-brainer that it should work, but I must be missing something.
@artist = Artist.find(params[:artist_id])
The parameters hash looks like this:
{"utf8"=>"������",
"authenticity_token"=>"XXXXXXXXXXXXXXX",
"password"=>"[FILTERED]",
"commit"=>"Download",
"action"=>"show",
"controller"=>"albums",
"artist_id"=>"62",
"id"=>"157"}
The error I get looks like this:
ActiveRecord::StatementInvalid: Mysql::Error: : SELECT `artists`.* FROM `artists` WHERE `artists`.`id` = ? LIMIT 1
Notice the WHERE `artists`.`id` = ? part of the statement? It's trying to find an ID of QUESTION MARK, meaning Rails is not passing in params[:artist_id], which is obviously in the params hash. I'm at a complete loss.
I get the same error on different pages trying to select the record in a similar fashion.
My environment: Cedar Stack on Heroku (this only happens on Heroku), Ruby 1.9.3, Rails 3.2.8, files being hosted on Amazon S3 (though I doubt it matters), using the mysql gem (not mysql2, which doesn't work at all), ClearDB MySQL database.
Here's the full trace.
Any help would be tremendously appreciated.
try sql?
If it's just this one statement, and it's causing production problems, can you omit the query generator just for now? In other words, for very short term, just write the SQL yourself. This will buy you a bit of time.
# All on one line:
Artist.find_by_sql
"SELECT `artists`.* FROM `artists`
WHERE `artists`.`id` = #{params[:artist_id].to_i} LIMIT 1"
ARel/MySQL explain?
Rails can help explain what MySQL is trying to do:
Artist.where(id: params[:artist_id]).explain
http://weblog.rubyonrails.org/2011/12/6/what-s-new-in-edge-rails-explain/
Perhaps you can discover some kind of difference between the queries that are succeeding vs. failing, such as how the explain uses indexes or optimizations.
mysql2 gem?
Can you try changing from the mysql gem to the mysql2 gem? What failure do you get when you switch to the mysql2 gem?
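For reference, switching adapters is a Gemfile and database.yml change (a sketch; gem versions are omitted):
# Gemfile
gem 'mysql2'
# config/database.yml (per environment)
#   adapter: mysql2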
volatility?
Perhaps there's something else changing the params hash on the fly, so you see it when you print it, but it's changed by the time the query runs?
Try assigning the variable as soon as you receive the params:
artist_id = params[:artist_id]
... whatever code here...
@artist = Artist.find(artist_id)
not the params hash?
You wrote "Meaning Rails is not passing in the params[:artist_id] which is obviously in the params hash." I don't think that's the problem-- I expect that you're seeing this because Rails is using the "?" as a placeholder for a prepared statement.
To find out, run the commands suggested by #Mori and compare them; they should be the same.
Article.find(42).to_sql
Article.find(params[:artist_id]).to_sql
prepared statements?
Could be a prepared statement cache problem, when the query is actually executed.
Here's the code that is failing-- and there's a big fat warning.
begin
stmt.execute(*binds.map { |col, val| type_cast(val, col) })
rescue Mysql::Error => e
# Older versions of MySQL leave the prepared statement in a bad
# place when an error occurs. To support older mysql versions, we
# need to close the statement and delete the statement from the
# cache.
stmt.close
@statements.delete sql
raise e
end
Try configuring your database to turn off prepared statements, to see if that makes a difference.
In your ./config/database.yml file:
production:
  adapter: mysql
  prepared_statements: false
  ...
bugs with prepared statements?
There may be a problem with Rails ignoring this setting. If you want to know a lot more about it, see this discussion and bug fix by Jeremy Cole and Aaron: https://github.com/rails/rails/pull/7042
Heroku may ignore the setting. Here's a way you can try overriding Heroku by patching the prepared_statements setup: https://github.com/rails/rails/issues/5297
remove the query cache?
Try removing the ActiveRecord QueryCache to see if that makes a difference:
config.middleware.delete ActiveRecord::QueryCache
http://edgeguides.rubyonrails.org/configuring.html#configuring-middle
try postgres?
If you can try Postgres, that could clear it up too. That may not be a long term solution for you, but it would isolate the problem to MySQL.
The MySQL statement is obviously wrong, but the Ruby code you mentioned would not produce it. Something is wrong here: either you use different Ruby code (maybe from a before_filter) or pass a different parameter (like params[:artist_id] = "?"). It looks like you use nested resources, something like Artist has_many :albums. Maybe the @artist variable is not initialized correctly in the previous action, so that params[:artist_id] does not have the right value?
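For context, a nested-resources setup like the one guessed at above would look like this (a sketch inferred from the params hash, not from the actual app):
# config/routes.rb
resources :artists do
  resources :albums   # yields params[:artist_id] and params[:id] in AlbumsController
end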

MySQL to PostgreSQL migration

My PostgreSQL is installed on Windows. How can I migrate data from MySQL database to PostgreSQL?
I've read tons of articles. Nothing helps :(
Thanks.
My actions:
mysql dump:
mysqldump -h 192.168.0.222 --port 3307 -u root -p --compatible=postgresql synchronizer > c:\dump.sql
create db synchronizer at pgsql
import dump:
psql -h 192.168.0.100 -d synchronizer -U postgres -f C:\dump.sql
output:
psql:C:/dump.sql:17: NOTICE: table "Db_audit" does not exist, skipping
DROP TABLE
psql:C:/dump.sql:30: ERROR: syntax error at or near "("
LINE 2: "id" int(11) NOT NULL,
^
psql:C:/dump.sql:37: ERROR: syntax error at or near ""Db_audit""
LINE 1: LOCK TABLES "Db_audit" WRITE;
^
psql:C:/dump.sql:39: ERROR: relation "Db_audit" does not exist
LINE 1: INSERT INTO "Db_audit" VALUES (4068,4036,4,1,32,'2010-02-04 ...
^
psql:C:/dump.sql:40: ERROR: relation "Db_audit" does not exist
LINE 1: INSERT INTO "Db_audit" VALUES (19730,2673,2,2,44,'2010-11-23...
^
psql:C:/dump.sql:42: ERROR: syntax error at or near "UNLOCK"
LINE 1: UNLOCK TABLES;
^
psql:C:/dump.sql:48: NOTICE: table "ZHNVLS" does not exist, skipping
DROP TABLE
psql:C:/dump.sql:68: ERROR: syntax error at or near "("
LINE 2: "id" int(10) unsigned NOT NULL,
^
psql:C:/dump.sql:75: ERROR: syntax error at or near ""ZHNVLS""
LINE 1: LOCK TABLES "ZHNVLS" WRITE;
^
psql:C:/dump.sql:77: WARNING: nonstandard use of escape in a string literal
LINE 1: ...???????? ??? ???????','10','4607064820115','0','','??????-??...
^
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
Cancel request sent
psql:C:/dump.sql:77: WARNING: nonstandard use of escape in a string literal
LINE 1: ...??????????? ????????','10','4602784001189','0','','???????? ...
My experience with MySQL -> Postgresql migration wasn't really pleasant, so I'd have to second Daniel's suggestion about CSV files.
In my case, I recreated the schema by hand and then imported all tables, one by one, using mysqldump and pg_restore.
So, while this dump/restore may work for the data, you are most likely out of luck with the schema. I haven't tried any commercial solutions, so see what other people say and... good luck!
UPDATE: I looked at the code the process left behind and here is how I actually did it.
I had a slightly different schema in my PostgreSQL db, so some tables were joined and some were split. This is why a straightforward import was not an option; my case is probably more complex than what you describe, and this solution may be overkill.
For each table in the PG database I wrote a query that selects the relevant data from the MySQL database. In case the table is basically the same in both databases and there are no joins, it can be as simple as this:
select * from mysql_table_name
Then I exported the results of this query to XML. To do this, you need to run it like this:
echo "select * from mysql_table_name" | mysql [CONNECTION PARAMETERS] -X --default-character-set=utf8 > mysql_table_name.xml
This will create a simple XML file with the following structure:
<resultset statement="select * from mysql_table_name">
<row>
<field name="some_field">field_value</field>
...
</row>
...
</resultset>
Then I wrote a script that produces an INSERT statement for each row element in this XML file. The name of the table to insert the data into was given as a command-line parameter to the script. It was a Python script, in case you need it (a rough sketch of the idea follows this answer).
These sql statements were written to a file, and then fed to psql like this:
psql [CONNECTION PARAMETERS] -f FILENAME -1
The only trick in the XML -> SQL transformation was to recognize numbers and leave them unquoted.
To sum it up: mysql can produce query results as XML and you can use it.
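A rough sketch of such an XML-to-INSERT script (not the original author's code; the table name is passed on the command line, numbers are left unquoted, everything else is single-quoted):
# xml_to_inserts.py <file.xml> <table_name>
import sys
import xml.etree.ElementTree as ET

def render(value):
    if value is None:
        return "NULL"
    try:
        float(value)
        return value                      # looks numeric: leave unquoted
    except ValueError:
        return "'" + value.replace("'", "''") + "'"

def main(xml_file, table):
    for row in ET.parse(xml_file).getroot().findall("row"):
        fields = row.findall("field")
        cols = ", ".join('"%s"' % f.get("name") for f in fields)
        vals = ", ".join(render(f.text) for f in fields)
        print('INSERT INTO "%s" (%s) VALUES (%s);' % (table, cols, vals))

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])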
It's a bit more complicated than that. There is plenty of documentation here:
http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
There, you'll also find conversion scripts.
In my rather simple case (30 tables, 10,000 records), I used a Perl script:
http://pgfoundry.org/frs/?group_id=1000198
It chugged through the mysql dump file and produced a pg dump file, with the following issues.
I was importing to Heroku so I used their pgbackups plugin which worked almost flawlessly.
Issues to watch for
Boolean data types. MySQL stores these as 0 and 1. PostgreSQL stores them as t and f. Watch that the booleans don't get migrated as integers.
Auto-incrementing IDs. You may find your IDs start counting again from 1. You'll get errors like this: "duplicate key value violates unique constraint ...". It's easy to fix (one way is sketched below), but watch out for it.
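One common way to fix the sequences afterwards (a sketch; some_table and id are placeholders, repeat for each table with a serial primary key):
SELECT setval(pg_get_serial_sequence('some_table', 'id'),
              COALESCE((SELECT MAX(id) FROM some_table), 1));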
I've used py-mysql2pgsql for converting a big MySQL database into Postgres. It handles most cases very well. I had to patch it for a couple of cases specific to my needs, though.
https://pypi.python.org/pypi/py-mysql2pgsql
By default, it reads data from MySQL and writes to Postgres. But you can ask it to write the schema and/or data to a file for inspecting before loading into Postgres.
You can use https://github.com/mihailShumilov/mysql2postgresql
This converter is written in PHP.
There's also a very nice (fork of a) python converter that is maintained by the gitlab creators:
https://github.com/gitlabhq/mysql-postgresql-converter
The original project it was forked from is stale. For me, everything worked perfectly using this script.
Here is a project that migrates your current MySQL database to PostgreSQL in a couple of commands, including indexes and foreign keys. It also lets you customize names, indexes and column-type parsing, so you can override the default behavior.
https://github.com/ggarri/mysql2psql
I hope it can be useful for anyone interested in migrating their current project to PG; in our case we got around a 20% performance increase.
It is much better to use a program that automates the migration process.
Even if you are familiar with all the gotchas, doing every step by hand may take a lot of time, especially when your db is "big".
Try FromMySqlToPostgreSql.
This tool is feature-rich and easy to use.
It maps data types and migrates constraints, indexes, PKs and FKs exactly as they were in your MySQL db.
Under the hood it uses PostgreSQL COPY, so data transfer is very fast.