I'm asking this question because I don't even have a clue what to Google for. I have a MariaDB database which I access through Node.js' mysql module. I write my code in TS.
My problem is that the database I'm trying to access will collect millions of datasets over time, and querying it might take a while. I would like to find a way to go through the database and serve each dataset as soon as it is found, instead of querying the whole database first and then sending one accumulated result.
Do you have any clue how I can solve this or what to google / YouTube search for?
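The thing to search for is "streaming query results" (also described as unbuffered queries or server-side cursors). Below is a minimal sketch, assuming the mysql npm package (with @types/mysql) and a hypothetical table called datasets; it handles each row as it arrives instead of buffering the whole result set first.

```typescript
// Minimal sketch, assuming the `mysql` npm package and a hypothetical table
// `datasets`; connection settings are placeholders.
import mysql from "mysql";

const connection = mysql.createConnection({
  host: "localhost",
  user: "user",
  password: "password",
  database: "mydb",
});

connection.connect();

const query = connection.query("SELECT * FROM datasets");

query
  .on("error", (err) => {
    console.error(err);
  })
  .on("result", (row) => {
    // Called once per row as it arrives from the server.
    connection.pause();   // stop reading while this row is handled
    serveDataset(row);    // e.g. write it to an HTTP response stream
    connection.resume();  // continue with the next row
  })
  .on("end", () => {
    connection.end();
  });

function serveDataset(row: unknown): void {
  console.log(row);
}
```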
I am working on a school project where we need to create a website and use Redis to search a database; in my case it will be a movie database. I have a JSON file with the names and ratings of 100 movies. I would like to upload this dataset into Redis instead of entering the entire dataset manually. The JSON file is saved on my desktop and I am using Ubuntu 20.04.
Is there a way to do it?
I have never used Redis, so my question might be very silly. I've been looking all over the internet and cannot find exactly what needs to be done. I might be Googling the wrong question; maybe that's why I cannot find the answer.
Any help would be appreciated.
Write an appropriate program to do the job. There's no one-size-fits-all process because how your data is structured in redis is up to you; once you decide on that, it should be easy to write a program to parse the JSON and insert the data.
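As a rough sketch of what such a program could look like in TypeScript with the node-redis client (v4): the file path and the name/rating field names are assumptions about your JSON, and the key layout (one hash per movie plus a sorted set keyed by rating) is just one possible choice.

```typescript
// Minimal sketch, assuming node-redis v4 (`npm install redis`) and a JSON file
// shaped like [{ "name": "...", "rating": 8.1 }, ...]; adjust field names to match.
import { readFileSync } from "fs";
import { createClient } from "redis";

interface Movie {
  name: string;
  rating: number;
}

async function loadMovies(path: string): Promise<void> {
  const movies: Movie[] = JSON.parse(readFileSync(path, "utf8"));

  const client = createClient(); // defaults to redis://localhost:6379
  await client.connect();

  for (const movie of movies) {
    // One hash per movie...
    await client.hSet(`movie:${movie.name}`, {
      name: movie.name,
      rating: movie.rating.toString(),
    });
    // ...plus a sorted set so movies can be ranked or range-queried by rating.
    await client.zAdd("movies:by_rating", [{ score: movie.rating, value: movie.name }]);
  }

  await client.quit();
}

loadMovies("/home/you/Desktop/movies.json").catch(console.error);
```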
I am using SSIS to move data from a local MSSQL Server table to a remote MySQL table (data flow, OLE DB source and ODBC destination). This works fine if I'm only moving 2 rows of data, but it is very slow with the table I actually want, which has 5,000 rows and fits into a CSV of about 3 MB; this currently takes about 3 minutes using SSIS's options, whereas performing the steps below takes 5 seconds at most.
I can export the data to a CSV file, copy it to the remote server, then run a script to import it straight into the DB, but this requires more steps than I would like, as I have multiple tables I want to perform these steps on.
I have tried row by row and batch processing but both are very slow in comparison.
I know I can use the above steps but I like using the SSIS GUI and would have thought there was a better way of tackling this.
I have Googled multiple times but have not found anything that fits the bill, so I am calling on external opinions.
I understand SSIS has its limitations, but I would hope there is a better and faster way of achieving what I am trying to do. If SSIS is so bad, I may as well just rewrite everything into a script and be done with it, but I like the look and feel of the GUI and would like to move my data in this nice, friendly way of seeing things happen.
Any suggestions or opinions would be appreciated.
Thank you for your time.
As above, I have tried SSIS options, including a third-party option (CozyRoc), but that sent some data with errors now and again (the column delimiting seemed off), copied differing numbers of rows, and caused enough problems to make me not trust the data.
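For what it's worth, if the scripted fallback ever becomes necessary: most of the slowness in row-by-row transfers comes from one network round trip per row, and batching many rows into a single multi-row INSERT recovers most of the CSV-import speed. A minimal sketch, assuming the mysql npm package, with a placeholder table and rows (e.g. parsed from the CSV you already export):

```typescript
// Minimal sketch, assuming the `mysql` npm package; the table, columns and the
// rows array are placeholders (e.g. parsed from the CSV export).
import mysql from "mysql";

const connection = mysql.createConnection({
  host: "remote-mysql-host",
  user: "user",
  password: "password",
  database: "mydb",
});

// Each inner array is one row, in column order.
const rows: Array<Array<string | number>> = [
  [1, "first row"],
  [2, "second row"],
];

// Nested arrays are expanded by the driver into a single multi-row INSERT,
// so 5,000 rows become one statement instead of 5,000 round trips.
connection.query(
  "INSERT INTO target_table (id, description) VALUES ?",
  [rows],
  (err) => {
    if (err) throw err;
    connection.end();
  }
);
```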
I am new to MySQL and databases in general. I've come across a problem that I can't seem to find a solution to. I'm not even sure how to word the scenario.
Basically, I have a concatenated field that I need to "explode" so that each value of that field becomes its own row. I'm pulling this data from a daily feed, so this needs to be automated as either a trigger, a stored procedure, or a scheduled job in Talend. Here is a simplified version of the situation. Please see the "current file format" and "desired outcome" pics. Sorry, this is my first post here! Any help would be appreciated!
I am using a MySQL database. Currently the file is coming in via CSV export, but I will be connecting via Talend Open Studio to sync the data. See the links below for an example. Here is a sample.
A precise answer is difficult without seeing your code, and the sample link returns 404 for me. You should be able to do this within Talend. Try using Talend's tExtractDelimitedFields or tExtractRegexFields components to achieve the result you want.
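For reference, the transformation itself is small; here is a minimal TypeScript sketch with hypothetical id and values column names, turning each comma-separated value into its own output row (the shape you would map the Talend job to produce):

```typescript
// Minimal sketch; `id` and `values` are hypothetical column names standing in
// for the real feed. Each comma-separated value becomes its own output row.
interface SourceRow {
  id: number;
  values: string; // e.g. "red,green,blue"
}

interface TargetRow {
  id: number;
  value: string;
}

function explodeRows(rows: SourceRow[], delimiter = ","): TargetRow[] {
  return rows.flatMap((row) =>
    row.values
      .split(delimiter)
      .map((v) => v.trim())
      .filter((v) => v.length > 0)
      .map((value) => ({ id: row.id, value }))
  );
}

// Example: one input row fans out to three target rows.
console.log(explodeRows([{ id: 1, values: "red,green,blue" }]));
// -> [ { id: 1, value: 'red' }, { id: 1, value: 'green' }, { id: 1, value: 'blue' } ]
```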
I need to load the result of a specific query into Redshift daily. I've already created a table on Redshift that will hold the results of this query, but now I'm a little stuck since I can't find a good way to solve this.
So far I've tried using Python, but I'm getting lots of headaches regarding line terminators in fields that basically store a description, and character encodings.
I know lots of programs that let you connect to a DB and run queries also have an option to export to CSV, but since I need to do this automatically every day, I don't think any of those would work for me.
Now I would like to know if there are better-suited options so I can start looking into them. I'm not asking for a step-by-step how-to, just for tools/programs/etc. that I should start looking into.
You should look into MySQL's stored procedures and events -- using just MySQL, you can have it generate a file every day.
You can't dynamically rename the file, or overwrite it, though, so you'd need a second job which deletes the file -- this can be done with Python.
Whether you're running Windows or Linux, you should be able to schedule a batch file or python script to execute once a day, and that would be an alternate way to do that.
Does this address your question?
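If you end up with the scheduled-script approach, the line-terminator headaches usually come from writing CSV by hand; a proper CSV writer quotes embedded newlines for you. A rough sketch in TypeScript, assuming a MySQL source as in the answer above, the mysql and csv-stringify npm packages, and placeholder query and output paths; a cron or Task Scheduler entry could run it once a day, overwriting the previous file. Loading the resulting file into Redshift (e.g. uploading to S3 and running COPY) is a separate step.

```typescript
// Minimal sketch, assuming the `mysql` and `csv-stringify` packages, a MySQL
// source (as the answer above assumes), and placeholder query/paths.
import { writeFileSync } from "fs";
import mysql from "mysql";
import { stringify } from "csv-stringify/sync";

const connection = mysql.createConnection({
  host: "localhost",
  user: "user",
  password: "password",
  database: "mydb",
  charset: "utf8mb4", // avoid encoding surprises in description fields
});

connection.query("SELECT id, description FROM my_table", (err, rows) => {
  if (err) throw err;

  // quoted: true wraps every field in quotes, so embedded newlines in the
  // description column survive inside a single CSV record.
  const csv = stringify(rows, {
    header: true,
    quoted: true,
  });

  // Overwrites yesterday's file, so no separate cleanup job is needed.
  writeFileSync("/tmp/daily_export.csv", csv, "utf8");
  connection.end();
});
```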
I have inherited a legacy Access app that has a TON of code running before the main form comes up. This code imports data, deletes data, and changes data a million ways.
Is there a way (after the startup stuff is finished) that I can list the tables and see when each of them most recently had data affected?
Thanks!
Sorry, but I'm afraid the answer to your question is simply: No.
Here's an idea though:
Make a backup of the database file.
Open the app so it runs the code you are concerned about.
Compare the DB to the backup using a tool like Red Gate SQL Data Compare.
BTW: I'm not sure whether the Red Gate tool works against Access databases, but FMS Inc. has one that claims to.