Multiple entries in one MySQL cell and JSON data - mysql

I have many folders, each named after a series. Every series folder contains its own chapter folders, and each chapter folder holds some images. My website is a manga (comic) site, so I want to record these folder and image paths in MySQL and return them as JSON data for use with AngularJS. How should I save these folder paths or names in MySQL so that I get proper JSON data to use with AngularJS?
My table currently looks like this (it can change):
id | series_name | folder | path
1  | Dragon Ball | 788    | 01.jpg02.jpg03.jpg04.jpg05.jpg06.jpg..........
2  | One Piece   | 332    | 01.jpg...................
3  | One Piece   | 333    | 01.jpg02.jpg...........
My current website:
Link to Reader Part of My WebSite

I'm assuming you're using PHP on a LAMP stack. First you would need to fetch all your SQL fields and turn them into JSON keys (see "Create JSON-object the correct way").
Then you can create your JSON object like this and pass it to Angular when it makes an AJAX request. Make sure you build the array (for path) before placing it into your JSON object:
{
    id: Number,
    series_name: String,
    folder: Number,
    path: [
        String, String, String, ...
    ]
}
Here is the Angular documentation for making a GET request:
https://docs.angularjs.org/api/ng/service/$http
EDIT:
It's difficult because of how your filenames are stored. If they were stored like "01.jpg,02.jpg,03.jpg" it would be easier.
You can use preg_split with the regex:
$string = "01.jpg02.jpg03.jpg04.jpg05.jpg06.jpg";
// Escape the dots so only the literal extensions match, and drop the empty trailing piece
$keywords = preg_split("/(\.jpg|\.png|\.bmp)/", $string, -1, PREG_SPLIT_NO_EMPTY);
but you would need all of the files to share the same extension, and you would need to re-append the extension to each element after the split.
There may be a better way.

Related

JSON Schema: take values from another file (non-JSON), take file names

Is it possible to restrict values or property names in a schema according to data defined in another JSON file (not a schema, just a data file)? Or even take files from a folder and use their names?
For example, YAML:
file 1:
Attributes:
  - Attribute1
  - Attribute2
file 2:
Influence:
  Attribute1: 1
  Attribute2: -3
I want to have syntax help in the second file that depends on the data defined in the first file. How can I do it?
And a harder case:
There is a folder with some YAML/JSON files that describe events, like:
Events/event1.yaml
Events/subfolder/event2.yaml
Another file should use only the file names defined in that folder.
For example:
DefaultEvents:
- event1
- event2
Is it possible, and how do I get autocomplete with JSON Schema in such a case?
It's not about validation; I need syntax help and autocomplete while writing such files.
The only possibility I found is to add all possible values to the JSON Schema dynamically, using whatever programming language you work with.
This solution is sufficient when the JSON Schema is stored locally in your project.
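A minimal sketch of that idea in Python, assuming a local schema file named events.schema.json whose DefaultEvents property is an array, and an Events/ folder like in the question; the file name, the property path and the use of .yaml files are assumptions for illustration:
import json
from pathlib import Path

# Hypothetical locations; adjust to your project layout.
EVENTS_DIR = Path("Events")
SCHEMA_FILE = Path("events.schema.json")

# Collect event names from the folder (file names without extension), including subfolders.
event_names = sorted(p.stem for p in EVENTS_DIR.rglob("*.yaml"))

# Inject the names as an enum so editors can offer them as completions.
# Assumes the schema declares DefaultEvents as an array of strings.
schema = json.loads(SCHEMA_FILE.read_text(encoding="utf-8"))
schema["properties"]["DefaultEvents"]["items"]["enum"] = event_names
SCHEMA_FILE.write_text(json.dumps(schema, indent=2), encoding="utf-8")
Re-running this (for example as a build or pre-commit step) keeps the enum, and therefore the editor's autocomplete, in sync with the folder contents.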

What does it mean when a KinematicBody2D is stored in a JSON file - Godot?

After writing some code for saving to my JSON file in Godot, I saved the information in a variable called LData and it is working. LData looks like this:
{
    "ingredients": [
        "[KinematicBody2D:1370]"
    ],
    "collected": [
        {
            "iname": "Pineapple",
            "collected": true
        },
        {
            "iname": "Banana",
            "collected": false
        }
    ]
}
What does it mean when the file says KinematicBody2D:1370? I understand that it is saving the node in the file - or is it just saving a string? Is it saving the node's properties as well?
Then I tried retrieving the data - a variable that is assigned to the saved KinematicBody2D.
Code:
for ingredient in LData.ingredients:
    print(ingredient.iname)
Error:
Invalid get index name 'iname' (on base: 'String')
I am assuming that the data is stored as a String and that I need some code to get back the exact node it saved. Using get_node also throws an error.
Code:
for ingredient in LData.ingredients:
    print(get_node(ingredient).iname)
Error:
Invalid get index 'iname' (on base: 'null instance')
What information exactly is it storing when it says [KinematicBody2D:1370]? How do I access the variable iname and any other variables that are assigned to the node when the game is loaded and are not changed during the entire game?
[KinematicBody2D:1370] is just the string representation of a Node, which comes from Object.to_string:
Returns a String representing the object. If not overridden, defaults to "[ClassName:RID]".
If you truly want to serialize an entire Object, you could use Marshalls.variant_to_base64 and put that string in your json file. However, this will likely bloat your json file and contain much more information than you actually need to save a game. Do you really need to save an entire KinematicBody, or can you figure out the few properties that need to be saved (position, type of object, etc.) and reconstruct the rest at runtime?
You can also save objects as Resources, which is more powerful and flexible than a JSON file, but tends to be better suited to game assets than save games. However, you could read the Resource docs and see if saving Resources seems like a more appropriate solution to you.

How to add entries to a JSON array/list

I'm trying to set up a Discord bot that only lets people on a list in a JSON file use it. I am wondering how to add data to the JSON array/list, but I'm not sure how to move forward and have had no real luck finding answers elsewhere.
This is an example of how the JSON file looks:
{
    IDs: [
        "2359835092385",
        "4634637576835",
        "3454574836835"
    ]
}
Now, what I am looking to do is add a new ID to "IDs" without completely breaking it, and I also wish to have other entries in the JSON file so I can make something like "AdminIDs" for people that can do more with the bot.
Yes, I know I can do this role-based in guilds/servers, but I would like to be able to use the bot in DMs as well as in guilds/servers.
What I want/need is a short, simple-to-manipulate script that I can easily put into a new command, so I can add new people to the bot without having to open and edit the JSON file manually.
If your data is still a JSON string rather than a Python object, parse it first with json.loads (json.dumps does the opposite, turning a Python object into a JSON string):
import json
json_string = '{"IDs": ["2359835092385", "4634637576835", "3454574836835"]}'
parsed_json = json.loads(json_string)  # now a regular dict
print(parsed_json['IDs'])
Then you can simply use this data like a normal list and append data to it.
All keys must be quoted as strings.
In this case the key is "IDs", its value is the list, and the list's values are the items inside it.
import json

data = {
    "IDs": [
        "2359835092385",
        "4634637576835",
        "3454574836835"
    ]
}
Let's say your JSON data comes from a file; to load it so that you can manipulate it, do the following:
raw_json_data = open('filename.json', encoding='utf-8')
j_data = json.load(raw_json_data)  # Now j_data is basically the same as data, just under a different name
print(j_data)
# >> {'IDs': ['2359835092385', '4634637576835', '3454574836835']}
To add things to the list IDs, use the append method:
data['IDs'].append('adding something') #or j_data['IDs'].append("SOMEthing")
print(data)
# >> {'IDs': ['2359835092385', '4634637576835', '3454574836835', 'adding something']}
To add a new key
data['Names']=['Jack','Nick','Alice','Nancy']
print(data)
# >> {'IDs': ['2359835092385', '4634637576835', '3454574836835', 'adding something'], 'Names': ['Jack', 'Nick', 'Alice', 'Nancy']}
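Since the goal is to update the list from a bot command without opening the file by hand, here is a minimal sketch of the missing write-back step, assuming the file is called filename.json as above (the helper name add_id is made up for illustration):
import json

def add_id(new_id, path='filename.json'):
    # Load the current JSON data from disk.
    with open(path, encoding='utf-8') as f:
        data = json.load(f)
    # Append the new ID and write the whole structure back.
    data['IDs'].append(new_id)
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(data, f, indent=4)

add_id('1234567890123')  # example ID
You can call the same helper from the bot command that should whitelist a new user, and manage a parallel key such as "AdminIDs" the same way.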

Add element in JSON without key identifier - Firebase

This is the structure of my Firebase database and the JSON file that I used to create that Firebase structure.
"Menu" is a list of ingredients, divided into categories, like "Pane" (bread in English). "Pane" has a field "lista" that holds all the types of bread (each type of bread has 3 fields: "attivo", "nome", "prezzo").
I need to let the user add a new ingredient, that is, add a new element that has the fields "attivo", "nome" and "prezzo".
The problem is that my sub-ingredients (types of bread) don't have an identifier, so I don't know how to add a child to "lista" without an id key.
With this code (used in my TypeScript file) the whole "lista" field is replaced by the new element, and I can't use .child() because I don't have an identifier to pass:
firebase.database().ref('/menu/pane/lista').set({
    nome: data['nome'],
    prezzo: data['prezzo'],
    attivo: false,
});
Is it possible to add an element without having an identifier and keep a structure like my JSON file?
EDIT:
If I use set or push, my JSON structure changes, and I don't want that.
Thank you in advance.
Solved
I've solved the problem with a workaround. I retrieve the array that populates "lista" and push the new ingredient into it as into a normal array, then I write the array with the new element back to "lista" on Firebase. This way all of the "lista" content is replaced by the array I push, but since that array contains the new element, it works!
Yes, it would be possible to save the values into an array stored under the lista node, but this would create some "extra complexity". See this Firebase blog post for more details: https://firebase.googleblog.com/2014/04/best-practices-arrays-in-firebase.html
The recommended way to add some data to a list without having a (natural) uid is to use the push method:
https://firebase.google.com/docs/reference/js/firebase.database.Reference#push.
So, you should do as follows and Firebase will automatically generate a unique id for your new record:
firebase.database().ref('/menu/pane/lista').push({
    nome: data['nome'],
    prezzo: data['prezzo'],
    attivo: false
});
If you don't want the identifiers of your pane nodes to be auto-generated (as an alphanumeric value like "-LStoAsjJ...."), you would need to generate them yourself. But then you would have to use a transaction to generate this sequence, and that would add some complexity too. It is probably better to use push() and re-engineer your front-end code in such a way that you can deal with the alphanumeric uids generated by Firebase.

Index JSON filename along with JSON content in Solr

I have 2 directories: one with txt files and the other with corresponding JSON (metadata) files (around 90000 of each). There is one JSON file for each txt file, and they share the same name (they don't share any other fields). I am trying to index all these files in Apache Solr.
The txt files just have plain text; I mapped each line to a field called 'sentence' and included the file name as a field using the Data Import Handler. No problems here.
Each JSON file has metadata: 3 tags: a URL, an author and a title (for the content in the corresponding txt file).
When I index the JSON files (I just used the _default schema and posted the fields to the schema, as explained in the official Solr tutorial), I don't know how to get the file name into the index as a field. As far as I know, there's no way to use the Data Import Handler for JSON files. I've read that I can pass a literal through the bin/post tool, but again, as far as I understand, I can't pass in the file name dynamically as a literal.
I NEED to get the file name; it is the only way I can associate the metadata with each sentence in the txt files in my downstream Python code.
So if anybody has a suggestion about how I should index the JSON file name along with the JSON content (or even some workaround), I'd be eternally grateful.
As @MatsLindh mentioned in the comments, I used pysolr to do the indexing and get the filename. It's pretty basic, but I thought I'd post what I did since pysolr doesn't have much documentation.
So, here's how you use pysolr to index multiple JSON files while also indexing their file names. This method can be used if your files and your metadata files share the same filename (but have different extensions) and you want to link them together somehow, like in my case.
1. Open a connection to your Solr instance using the pysolr.Solr command.
2. Loop through the directory containing your files and get the filename of each file using os.path.basename, storing it in a variable (after removing the extension, if necessary).
3. Read the file's JSON content into another variable.
4. pysolr expects whatever is to be indexed to be stored in a list of dictionaries, where each dictionary corresponds to one record.
5. Store all the fields you want to index in a dictionary (solr_content in my code below), making sure the keys match the field names in your managed-schema file.
6. Append the dictionary created in each iteration to a list (list_for_solr in my code).
7. Outside the loop, use the solr.add command to send your list of dictionaries to be indexed in Solr.
That's all there is to it! Here's the code.
import json
import os
from glob import iglob

import pysolr

solr = pysolr.Solr('http://localhost:8983/solr/collection_name')
folderpath = 'directory-where-the-files-are-present'
list_for_solr = []
for filepath in iglob(os.path.join(folderpath, '*.meta')):
    with open(filepath, 'r') as file:
        filename = os.path.basename(filepath)
        # filename is xxxx.yyyy.meta
        filename_without_extension = '.'.join(filename.split('.')[:2])
        content = json.load(file)
        solr_content = {}
        solr_content['authors'] = content['authors']
        solr_content['title'] = content['title']
        solr_content['url'] = content['url']
        solr_content['filename'] = filename_without_extension
        list_for_solr.append(solr_content)
solr.add(list_for_solr)
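As a follow-up, once the documents are indexed you can pull the metadata for a given file back out by querying on that field. A minimal sketch, reusing the solr connection above with a made-up filename value:
# Query Solr for the metadata document belonging to one file.
results = solr.search('filename:"xxxx.yyyy"')
for doc in results:
    print(doc['title'], doc['url'])
This is what lets the downstream Python code join each sentence back to its URL, author and title via the shared filename.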