Sorry, I am new to Solr, but I am stuck on this.
First, what I am doing: I have used a Tomcat server to install Solr.
What I want to do: import MySQL data into Solr.
I have been searching for hours but could not find a proper solution. I have seen many questions, and the nearest to it was this question, but it was no help because it had a different error.
This is my command:
http://localhost:8080/solr/db/dataimport?command=full-import
This is my error:
Error:HTTP Status 404 - /solr/db/dataimport
type: Status report
message: /solr/db/dataimport
description The requested resource is not available.
Sorry, I am new to Solr; any advice would be very helpful.
This line is from the example that ships with Solr. If you just found it on the internet, see the README.txt file in the example/example-DIH directory of the distribution.
If you are trying to configure your own collection, you need to replace the db part with your collection name, configure DIH in solrconfig.xml (libraries and end-point), and configure the import definition to use your own database.
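For illustration, a minimal sketch of the solrconfig.xml part (the lib path and the db-data-config.xml file name are placeholders here; match them to your own install and import definition):

<!-- load the DIH jars; adjust dir to where they live in your distribution -->
<lib dir="../../dist/" regex="solr-dataimporthandler-.*\.jar" />
<!-- the /dataimport end-point that full-import is sent to -->
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">db-data-config.xml</str>
  </lst>
</requestHandler>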
I have a custom log file in JSON format; the app we are using will output one entry per file, as follows:
{"cuid":1,"Machine":"001","cuSize":0,"starttime":"2017-03-19T15:06:48.3402437+00:00","endtime":"2017-03-19T15:07:13.3402437+00:00","rejectcount":47,"fitcount":895,"unfitcount":58,"totalcount":1000,"processedcount":953}
I am trying to ingest this into Elasticsearch. I believe this is possible, as I am using ES 5.x.
I have configured my Filebeat prospector and have attempted to pull out at least one field from the file for now, namely the cuid:
filebeat.prospectors:
- input_type: log
  json.keys_under_root: true
  paths:
    - C:\Files\output*-Account-*
  tags: ["json"]

output.elasticsearch:
  # The Logstash hosts
  hosts: ["10.1.0.4:9200"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: true

processors:
- decode_json_fields:
    fields: ["cuid"]
When I start Filebeat, it seems to harvest the files, as I get entries in the Filebeat registry file:
2017-03-20T13:21:08Z INFO Harvester started for file:
C:\Files\output\001-Account-20032017105923.json
2017-03-20T13:21:27Z INFO Non-zero metrics in the last 30s: filebeat.harvester.closed=160 publish.events=320 filebeat.harvester.started=160 registrar.states.update=320 registrar.writes=2
However, I can't seem to find the data within Kibana, and I am not entirely sure how to find it.
I have ensured the Filebeat templates are loaded in Kibana.
I have tried to read the documentation, and I think I understand it correctly, but I am still very hazy, as I am totally new to the stack.
I am still not entirely sure if this is the right answer; however, I managed to resolve my particular issue. We were writing out multiple JSON files to the directory, each with just a single line in it, as detailed above. Although Filebeat appeared to harvest the files, I don't think it was reading them.
I modified the application to make use of log4net and implement a RollingFileAppender. I then ran the application, which started emitting logs to the directory, and as if by magic, without modifying my filebeat.yml, it all just started working.
I can only conclude that Filebeat does not handle many single-line JSON files well, unless there is some other configuration I am unaware of.
I've just moved from 5.2.1 to 5.2.2 (a bug of ours was fixed).
Before I migrated, I exported all queries/searches to a JSON file in order to upload them to the new Kibana version.
First, I updated the ES version and, to make sure everything worked, I reopened Kibana 5.2.1 and imported the JSON file. All good :)
Afterwards, I updated to Kibana 5.2.2.
When I opened it, all searches, visualizations and dashboards were there. Is this the proper and straightforward way to copy my data when updating versions?
Or should I maybe do it like in this question?
Thanks
Ok I got it and it's quite simple :)
When creating queries/visualizations in Kibana, it saves them to the .kibana default index (set in the config file) in ES. So, when updating Kibana's version and reading from the same ES, the data will appear in the UI.
If a user wishes to save it to another index, they should change it in the config file.
For more reading see here
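For reference, the setting in question is the kibana.index option in kibana.yml; a minimal sketch (the value shown is the default):

# kibana.yml -- the ES index where Kibana stores saved searches, visualizations and dashboards
kibana.index: ".kibana"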
Does anyone know if the MySQL database schema changed from Jira version 6.x to 7.x?
I asked Atlassian support but got no definitive answer. They proposed installing a clean version of Jira 7 and comparing its tables with version 6.
Well, since no one replied, I downloaded and installed Jira 7 and compared it with Jira 6. Here are my observations (in my environment).
-- Table counts:
jira6 - 267
jira7 - 239
-- a bunch of AO.* tables were removed (this is where your count could be different)
-- these tables were added
board
boardproject
deadletter
tempattachmentsmonitor
-- these tables have added/changed index
cwd_user
jiraaction
jiraissue
All in all, what I mostly saw was the addition of CHARSET=utf8 COLLATE=utf8_bin to every "create table" statement.
I have asked the same thing, and they don't have one available.
Database schemas are available in PDF format here:
https://developer.atlassian.com/jiradev/jira-platform/jira-architecture/database-schema
Atlassian says -
The PDFs below show the database schema for JIRA 6.1 EAP 3 (m03) and JIRA 5.1.2.
jira70_schema.pdf
JIRA61_db_schema.pdf
JIRA_512_DBSchema.pdf
The database schema is also described in WEB-INF/classes/entitydefs/entitymodel.xml in the JIRA web application. The entitymodel.xml file has an XML definition of all JIRA's database tables, table columns and their data type. Some of the relationships between tables also appear in the file.
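As a rough illustration of what an entity definition in entitymodel.xml looks like (the fields shown here are a trimmed, illustrative subset, not the full Issue definition):

<entity entity-name="Issue" table-name="jiraissue">
    <field name="id" type="numeric"/>
    <field name="key" col-name="pkey" type="long-varchar"/>
    <prim-key field="id"/>
</entity>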
Generating JIRA database schema information
To generate schema information for the JIRA database, e.g. the PDF above, follow the instructions below. You can generate schema information in pdf, txt and dot formats. Note, if you want to generate the schema in PDF format, you need to have Graphviz installed.
Download the attached plugin:
For JIRA 5: jira-schema-diagram-generator-plugin-1.0.jar
For JIRA 6: jira-schema-diagram-generator-plugin-1.0.1.jar
For JIRA 7: jira-schema-diagram-generator-plugin-1.1.0.jar
Install the plugin in your JIRA instance by following the instructions on Managing JIRA's Plugins.
Go to the JIRA administration console and navigate to System > Troubleshooting and Support > Generate Schema Diagram
Keyboard shortcut: g + g + start typing generate
Enter the tables/columns to omit from the generated schema information, if desired.
If you want to generate a pdf, enter the path to the Graphviz executable.
Click Generate Schema.
The 'Database Schema' page will be displayed with links to the schema file in txt, dot and pdf format.
(You could probably get the XML or txt file and compare it in a file compare program to get specific changes.)
I am trying to find the best way to import all of our Lighthouse data (which I exported as JSON) into JIRA, which wants a CSV file.
I have a main folder containing many subdirectories, JSON files and attachments. The total size is around 50 MB. JIRA allows importing CSV data, so I was thinking of trying to convert the JSON data to CSV, but all converters I have seen online will only handle a single file, rather than parsing recursively through an entire folder structure and nicely creating the CSV equivalent, which could then be imported into JIRA.
Does anybody have any experience of doing this, or any recommendations?
Thanks, Jon
The JIRA CSV importer assumes a denormalized view of each issue, with all the fields available in one line per issue. I think the quickest way would be to write a small Python script to read the JSON and emit the minimum CSV. That should get you issues and comments. Keep track of which Lighthouse ID corresponds to each new issue key. Then write another script to add things like attachments using the JIRA SOAP API. For JIRA 5.0 the REST API is a better choice.
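A minimal sketch of such a script, assuming the usual Lighthouse export layout (tickets/*/ticket.json) and guessing at field names; check them against your own export before relying on this:

import csv
import glob
import json

# Walk the exported Lighthouse tickets and write one denormalized CSV row per ticket.
# The field names used below (number, title, original_body, state) are assumptions.
with open("lighthouse_tickets.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["lighthouse_id", "summary", "description", "status"])
    for path in glob.glob("lighthouse_export/tickets/*/ticket.json"):
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
        ticket = data.get("ticket", data)  # some exports nest the payload under a "ticket" key
        writer.writerow([
            ticket.get("number"),
            ticket.get("title"),
            ticket.get("original_body"),
            ticket.get("state"),
        ])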
We just went through a Lighthouse to JIRA migration and ran into this. The best thing to do is, in your script, to start at the top-level export directory and loop through each ticket.json file. You can then build a master CSV or JSON file to import into JIRA that contains all tickets.
In Ruby (which is what we used), it would look something like this:
require 'json'
require 'csv'

Dir.glob("path/to/lighthouse_export/tickets/*/ticket.json") do |ticket|
  JSON.parse(File.read(ticket)).each do |data|
    # access ticket data here and add it to a CSV row
  end
end
I need to update my Solr documents with detailed information I can grab from a MySQL database.
Example:
Solr field "city" --> "London" (read from an XML source with the post.jar tool)
At update time (the /update requestHandler is already configured with a custom plugin to do other stuff), Solr should query MySQL for more information about "London" (or whatever was just read)
Solr then updates the fields of that document with the query result
I've been trying with a JDBC plugin and with a DIH handler (which I can only use by calling /dataimport full-import... and I can't in my specific case), and so far no success :(
Have any of you had the same problem? How did you solve it? Thanks!
Edit: I forgot, for the DIH configuration I tried following this guide: http://www.cabotsolutions.com/2009/05/using-solr-lucene-for-full-text-search-with-mysql-db/
Please do include the full output of /dataimport/full-import when you access it in your browser. Solr error messages can get cryptic.
Have you considered uploading documents by XML? http://wiki.apache.org/solr/UpdateXmlMessages . It's more powerful, allowing you to use your own logic when uploading documents.
Read each row from SQL and compose an XML document (string), with each document under its own doc tag.
Post the entire XML string to /update. Don't forget to set the MIME type header to text/xml. And make sure to check your servlet container's (Tomcat, Jetty) upload limit on POSTs (Tomcat has a 2 MB limit, if I recall correctly); a sketch of the relevant setting follows.
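For the Tomcat limit just mentioned, the knob is the maxPostSize attribute on the HTTP Connector in conf/server.xml; a sketch with an illustrative 10 MB value:

<!-- conf/server.xml: maxPostSize is in bytes (set it to -1 to disable the limit) -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxPostSize="10485760" />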
Don't forget the commit and optimize commands.
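Putting the above together, a rough Python sketch of the flow; the table and column names (cities, name, population), the credentials, and the Solr URL are made-up placeholders, and it assumes the mysql-connector-python and requests libraries:

import mysql.connector  # pip install mysql-connector-python
import requests         # pip install requests
from xml.sax.saxutils import escape

SOLR_UPDATE_URL = "http://localhost:8080/solr/update"  # adjust to your core/collection

# Read rows from MySQL (placeholder connection details and table/column names).
conn = mysql.connector.connect(host="localhost", user="user", password="secret", database="geo")
cursor = conn.cursor()
cursor.execute("SELECT id, name, population FROM cities")

# Compose one <add> message with a <doc> per row.
docs = []
for row_id, name, population in cursor:
    docs.append(
        "<doc>"
        f"<field name='id'>{escape(str(row_id))}</field>"
        f"<field name='city'>{escape(str(name))}</field>"
        f"<field name='population'>{escape(str(population))}</field>"
        "</doc>"
    )
xml = "<add>" + "".join(docs) + "</add>"
cursor.close()
conn.close()

# Post the XML with the text/xml MIME type, then commit.
headers = {"Content-Type": "text/xml"}
requests.post(SOLR_UPDATE_URL, data=xml.encode("utf-8"), headers=headers)
requests.post(SOLR_UPDATE_URL, data="<commit/>", headers=headers)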