Can anyone suggest software to export an Absolute Database (.abs) file to MySQL? I have already tried http://www.componentace.com/bde_replacement_database_delphi_absolute_database.htm, which returned corrupted data while decoding during the export to MySQL, and ABC Amber Absolute Converter 1.03, which was unable to handle the amount of data (900 MB). Can anyone suggest alternatives? The database contains only one table (entries) with a single WIDEMEMO column, and it is that data I am trying to export to MySQL.
Have you thought about writing your own? If you've got one table with just one column, this isn't a big programming project.
Just open the table, loop through the records, and write them to a temporary intermediate file. Then write another program to read them into MySQL.
But I agree with Radu: if you're in good standing with the Absolute people, they should be able to help you. Maybe they, like me, can't figure out why you wouldn't just write a quick and dirty program to do this.
Sorry if I've overlooked something that makes my suggestion unreasonable.
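To sketch the second half of that two-step approach: once the records are in an intermediate text file (for example, written out by a small Delphi program using the Absolute Database components), the MySQL-loading program could look roughly like this. This is only a sketch, assuming Python with the MySQL Connector/Python driver, one WIDEMEMO value per line with no embedded newlines, and placeholder file, table and connection names throughout:

import mysql.connector  # assumes MySQL Connector/Python is installed

# Second step of the two-step approach above: read the intermediate file
# and insert each record into MySQL. File name, table name and connection
# details are placeholders.
conn = mysql.connector.connect(host="localhost", user="user", password="secret", database="target")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS entries (body LONGTEXT)")

with open("entries.txt", encoding="utf-8") as f:
    for line in f:
        cur.execute("INSERT INTO entries (body) VALUES (%s)", (line.rstrip("\n"),))

conn.commit()
cur.close()
conn.close()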
As I've answered in your other question, have you tried contacting the vendor (the Absolute Database producer) and asking them for advice?
Hello everybody, I'm new here as well as to MySQL/SQL, so I wonder if you could help me.
I have a *.INF file of around 340 KB which contains data from a lab device (a kind of recorder). I want to work with it using SQL, but I don't know if it is possible to import the data straight into a MySQL database. I can edit it with hex editors and so on, but I would like to place the data directly into MySQL. What I need specifically is for every row in the table to be in binary format, exactly 512 bytes, and rows should be created automatically until all the data has been filled in. Is this possible or not, and if yes, what column type should I use when creating the table? And how could I import the *.INF file afterwards?
Thank you all in advance! Cheers :)
P.S. Sorry for the bad English :)
P.S. 2. I would much appreciate it if you could share the needed SQL script/code.
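One possible approach, sketched below as an assumption rather than a tested recipe: store each 512-byte slice of the file in a BINARY(512) column and let an AUTO_INCREMENT id create the rows as they are inserted. The sketch uses Python with MySQL Connector/Python; file name, table name and connection details are placeholders.

import mysql.connector  # assumes MySQL Connector/Python is installed

CHUNK = 512  # every row holds exactly 512 bytes

conn = mysql.connector.connect(host="localhost", user="user", password="secret", database="lab")
cur = conn.cursor()

# One fixed-width binary column per row; table and column names are hypothetical.
cur.execute("CREATE TABLE IF NOT EXISTS inf_blocks ("
            "id INT AUTO_INCREMENT PRIMARY KEY, "
            "block BINARY(512) NOT NULL)")

with open("recorder.INF", "rb") as f:
    while True:
        piece = f.read(CHUNK)
        if not piece:
            break
        # Pad the final slice with zero bytes so it is still exactly 512 bytes.
        cur.execute("INSERT INTO inf_blocks (block) VALUES (%s)",
                    (piece.ljust(CHUNK, b"\x00"),))

conn.commit()
cur.close()
conn.close()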
I am hoping someone can point me in the right direction, in relation to the scenario I am faced with.
Essentially, I am given a CSV each day containing 200+ lines of payment information.
As the Payment reference is input by the user at source, this isn't always in the format I need.
The process is currently done manually, and can take considerable time, therefore I was hoping to come up with a batch file to isolate the reference I require, based on a set of parameters.
Each reference should be 11 digits in length, numeric only, and start with 1, 2 or 3.
I have attached a basic example with this post.
It may be that this isn't possible in batch, but any ideas would be appreciated.
Thanks in advance :-)
I'm not too sure about batch, but Python and regex can help you out here.
Here is a great tutorial on using CSVs with Python.
Once you have that down, you could use regex to keep only the correct values.
Here is an expression to help you out: ^[123][0-9]{10}$ (a leading 1, 2 or 3, followed by ten more digits, for eleven digits in total).
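A minimal sketch of putting the two together, assuming the reference sits in the first column of the CSV and using placeholder file names:

import csv
import re

# 11 digits total: a leading 1, 2 or 3 followed by ten more digits.
REF = re.compile(r"^[123][0-9]{10}$")

with open("payments.csv", newline="") as src, open("matched.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        if row and REF.match(row[0].strip()):  # assumes the reference is in column 1
            writer.writerow(row)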
I did some research on this and couldn't find many introductory resources for a beginner, so I'm looking for a basic understanding here of how the process works. The problem I'm trying to solve is as follows: I want to move data from an old database to a new one with a slightly different structure, possibly mutating the data a little bit in the process. Without going into the nitty-gritty detail, what are the general steps involved in doing this?
From what I gathered I would either be...
writing a ton of SQL queries manually (eesh)
using some complex tool that may be overkill for what I'm doing
There is a lot of data in the database, so writing INSERT queries from a SQL dump seems like a nightmare. What I was looking for is some way to write a simple program with logic like: for each row in the table "posts", take the value of the "body" attribute and put it in the "post-body" attribute of the new database, or something like that. I'm also looking for functionality like: append a 0 to the data in the "user id" column, then insert it in the new database (just an example; the point is to mutate the data slightly).
In my head I can construct the logic of how the migration would go very easily (definitely not rocket science here), but I'm not sure how to make this happen on a computer, iterating over a ridiculous amount of data, without doing it manually. What is the general process for doing this, and what tools might a beginner want to use? Is this even a good idea for someone who has never done it before?
Edit: by request, here is an example of a mutation I'd like to perform:
Old database: table "posts" with an attribute post_body that is a varchar 255.
New database: table "posts" with an attribute "body" that is a text datatype.
I want to take post_body from the old one and put it in body in the new one. I realize they are different datatypes, but they are both technically strings and should be fine to convert, right? Etc.: a bunch of manipulations like this.
Usually, the most time-consuming step of a database conversion is understanding both the old and the new structure, and establishing the correspondence of fields between the two.
Compared to that, the time it takes to write the corresponding SQL query is ridiculously short.
for each row in the table "posts", take the value of the "body" attribute and put it in the "post-body" attribute of the new database
INSERT INTO newdb.postattribute (id, attribute, value)
SELECT postid, 'post-body', body FROM olddb.post;
In fact, the tool that allows such data manipulation is... SQL! Really, this is already a very high-level language.
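To make that concrete with the example from the question (table and column names are assumptions, and both databases are taken to sit on the same MySQL server), a short script only has to hand the real work to SQL:

import mysql.connector  # assumes MySQL Connector/Python is installed

conn = mysql.connector.connect(host="localhost", user="user", password="secret")
cur = conn.cursor()

# Copy post bodies across; VARCHAR(255) values convert to TEXT implicitly.
cur.execute("INSERT INTO newdb.posts (id, body) "
            "SELECT id, post_body FROM olddb.posts")

# Example mutation: append a 0 to every user id while copying.
cur.execute("INSERT INTO newdb.users (user_id, name) "
            "SELECT CONCAT(user_id, '0'), name FROM olddb.users")

conn.commit()
cur.close()
conn.close()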
Good people,
I have observed that the MS Access ORDER BY clause sorts records in a non-ASCII way. This is different from MySQL, which is generally ASCII-compliant. Let me give you a little background so you understand why this is a problem for me.
Back in 2010, I wrote a generic database transaction logger. The goal was to detect changes occurring on (theoretically) any SQL database and log them in another database. To do this, I use a shadow MySQL database where I maintain a copy of the entire source database. The shadow database is designed using the EAV model so that it is agnostic to the source database schema.
Every once in a while, I read out of both the source and shadow databases, order the records based on their primary keys and format the records to correspond one-to-one. Then, I do a full database compare using a merge algorithm.
This solution had worked okay until last week, when a user set it up against an Access database with string primary keys which are not always alphanumeric. All of a sudden the software started logging ghost transactions that had not happened on the source database.
On closer examination, I found out that MS Access orders non-alphanumeric characters in a fashion different from MySQL. As such, my merge algorithm, which assumes similar sort order for both source and shadow records, started to fail.
Now, I have figured out a way I could tweak my software to "cure" such primary keys before using them, but it would help a great deal if I knew precisely what the nature of MS Access's ordering scheme is. Any ideas will be highly appreciated.
PS: Let me know if there's anything I need to clarify. I am trying to avoid typing too much of what may not be useful.
I had a difficult time with this a few years ago. I'm sorry I didn't retain the solution, but it used VBA and it was not concise or elegant.
I opened the tables as DAO recordsets, advanced through the records, and used the StrComp() function to compare keys. I experimented a lot with the binary/text option of StrComp(), and I believe it was finally necessary to insert an error-handling component!
This discussion may be relevant. Also this and this.
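If the goal is simply to make both sides sort identically before the merge comparison, another option is to ignore each database's ORDER BY and impose one ordering in the application. A rough sketch follows (Python, with hypothetical table and column names; the same idea works in any language):

# Fetch the primary keys from both sides, impose one ordering in the
# application, then walk the two sorted lists with a merge comparison.
# Table and column names passed in are placeholders.

def fetch_keys(cursor, table, key_column):
    cursor.execute(f"SELECT {key_column} FROM {table}")
    return [row[0] for row in cursor.fetchall()]

def normalize(key):
    # One consistent ordering for both sides: compare the raw UTF-8 bytes,
    # so neither the Access nor the MySQL collation is involved.
    return key.encode("utf-8")

def diff_keys(source_keys, shadow_keys):
    src = sorted(source_keys, key=normalize)
    shd = sorted(shadow_keys, key=normalize)
    i = j = 0
    only_in_source, only_in_shadow = [], []
    while i < len(src) and j < len(shd):
        a, b = normalize(src[i]), normalize(shd[j])
        if a == b:
            i += 1
            j += 1
        elif a < b:
            only_in_source.append(src[i])  # looks like an insert on the source
            i += 1
        else:
            only_in_shadow.append(shd[j])  # looks like a delete on the source
            j += 1
    only_in_source.extend(src[i:])
    only_in_shadow.extend(shd[j:])
    return only_in_source, only_in_shadow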
Is there any way to create something like a data.frame object in R that would point to a specific table in a MySQL database and behave like a data.frame? I haven't been able to find any mention of it.
As an example, let us say I have a table called customers with columns names, heights and weights, and I would like some function that would create a variable customer through which I could access the respective columns in a data.frame-like way, i.e. customer$heights, etc.
My problem is that I am working with very large datasets, and operating on the database is much faster; one might even hack some descriptive statistics in SQL to be used with such a pointer variable, for example sum, average, etc.
Thanks for any answer.
T.
Yes, external pointers can do that, and the RODBC package uses them. See the "Writing R Extensions" manual for an introduction to external pointers.
The ff, bigmemory and mmap packages may give you ideas about how to make external data appear internal to R. It can be done, but it's not a quick hack for a rainy afternoon.
And one is generally best off doing as much computation 'near the data' as possible. Were you using PostgreSQL, you could try the embedded PL/R extension. To my knowledge, no such extension exists for MySQL.
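To illustrate the 'near the data' point with the hypothetical customers table from the question: the descriptive statistics can be pushed into the query itself, so only the aggregates cross the wire. The sketch below uses Python purely for illustration, with placeholder connection details; an RODBC channel in R would issue the same SQL.

import mysql.connector  # assumes MySQL Connector/Python; connection details are placeholders

conn = mysql.connector.connect(host="localhost", user="user", password="secret", database="shop")
cur = conn.cursor()

# The aggregation happens inside MySQL; only three numbers come back,
# no matter how many rows the customers table holds.
cur.execute("SELECT COUNT(*), AVG(heights), AVG(weights) FROM customers")
count, avg_height, avg_weight = cur.fetchone()
print(count, avg_height, avg_weight)

cur.close()
conn.close()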