I've been building a custom module for SugarCRM and I'm running into some issues. When installing the module I'm met with 'Database failure. Please refer to sugarcrm.log for details.'
Upon checking the log file, I can see the error is this:
"MySQL error 1118: Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. You have to change some columns to TEXT or BLOBs
01/03/14"
Whilst my module does have a lot of fields, is there any way I could get around this? It seems like Sugar doesn't give me options for varchar/text etc. when creating fields.
Thanks!
I ran into this same problem when implementing SugarCRM as a multi-tenant solution. You have a couple of options.
1) Go into Studio and set the size of your fields to a smaller value. Each character in a varchar field can count for several bytes against the row size, depending on the character set, so if you reduce the number of characters allowed for each of your fields in Studio you will allow for more fields in your module (see http://dev.mysql.com/doc/refman/5.0/en/column-count-limit.html).
2) Divide those fields up into a couple of modules that relate to a parent module. This will spread your fields over more than one table, preventing you from hitting the limit (see the sketch below).
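To make option 2 concrete, here is a minimal MySQL sketch - the table and column names are made up for illustration and are not Sugar's actual schema:

CREATE TABLE my_module (
    id   CHAR(36)     NOT NULL PRIMARY KEY,
    name VARCHAR(100)
    -- ...first group of custom fields...
);

CREATE TABLE my_module_extra (
    id           CHAR(36) NOT NULL PRIMARY KEY,
    my_module_id CHAR(36) NOT NULL,   -- relates each row back to my_module.id
    notes        TEXT
    -- ...remaining custom fields live here, keeping each row under the limit...
);

In Sugar terms, the second table corresponds to a related module joined to the parent through a one-to-many relationship.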
I would be happy to assist further if you need a more in-depth explanation of either solution.
I am trying to import a file with order numbers that have values in excess of 9 billion into Access.
I have already figured out that I needed to switch from Access 2013 to Access 2016 so that I could enable "Support Bigint Data Type for Linked/Imported Tables". This enabled the Large Number data type in the Access Database.
At this point, I can manually enter Large Numbers into Access, and everything works okay, but when I try to import a .csv with Large Numbers, I get data conversion errors for the Large Numbers (and only the Large Numbers).
In the past, I've hit the issue where a text column whose first row looked like a number produced conversion errors for the actual text values, so I tried putting the Large Number order numbers at the top of the file, and that didn't work either.
Can anyone confirm that the Access 2016 Import Wizard can't import Large Number values?
If it can, has anyone successfully used the Import Wizard to get a Large Number into Access? How did you achieve that?
Actually, you can use a 20-year-old version of Access and it can handle numbers with 28 digits.
And if you can enter such numbers manually, then the data being imported is messed up, has extra junk, or has extra characters. So no, upgrading will not by magic fix badly formatted data.
I would consider importing the data into, say, a text column, and then seeing if you can remove the junk.
But if you want to try a larger column, then try Currency. And if you want a REALLY HUGE WHOPPER, then set the field's Number size to Decimal, which handles up to 28 digits.
An 18-digit order number fits in that with room to spare.
So, if the import is failing even with a column like that, then attempting to use a MUCH smaller BigInt will not help you at all here - not at all.
So, try that first - see if that works.
However, the fact that you can type in the larger values by hand means that the issue here is messy data - and not that the numbers are too large.
You may well have to settle for importing that column as text, and then running an update query, or even some VBA code, to clean out the bad junk, stray characters, or extra spaces.
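A minimal sketch of that clean-up pass, assuming the values landed in a text column called OrderNoText in a staging table called tblImport and the destination column OrderNo is a Currency or Large Number field (all of those names are made up; Access SQL does not allow inline comments, so the assumptions are spelled out here instead):

UPDATE tblImport
SET OrderNo = CCur(Replace(Replace(Trim(OrderNoText), ",", ""), " ", ""))
WHERE OrderNoText Is Not Null And Trim(OrderNoText) <> "";

This only strips spaces and thousands separators; rows containing other junk will still fail the conversion and need more Replace() calls or a VBA pass.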
So, you don't need to change Access versions to get larger numbers working. The main goal and idea behind the Large Number type is really better SQL Server support; it is not really, if at all, a new feature designed to let you work with larger numbers (that's more of a by-product of the first goal).
I think people are safer importing via a text file. Access's import specification just seems to fail; I can't rely on it for production systems - the comment by James Marshall above is correct. You can tell Access to import as text, but before it does, if it sees a phone number, say, it will try to convert it to a long int first and then just not import numbers exceeding that range, like phone numbers with area codes > 212.
When executing the batch script to replicate my data from Exact Online, I get the following error:
Error itgencun016: Exclamation itgendch033: Backing databases require Invantive Data Replicator to restrict the number of columns to 1,000 for 'ExactOnlineXML.XML.SubscriptionLines'.
It occurs for the following query:
select /*+ ods(true, interval '20 hours') */ count(*)
from ExactOnlineXML.XML.SubscriptionLines
Same goes for ExactOnlineXML.XML.InvoiceLines.
How can I replicate these tables without exceeding the maximum number of columns?
Invantive SQL places no restrictions on the length of column names nor on the number of columns for a table or view.
However, traditional databases were designed in another era and are typically restricted to, for instance, column names of 30 to 128 characters and 1,000, 1,024 or a few thousand columns. Remember that Oracle once ran in 64 KB (32 KB code, 32 KB data); that is approximately the size of this question and answer :-)
When replicating data from Exact Online into a traditional database like Oracle, SQL Server or PostgreSQL, Invantive Data Hub will use Invantive SQL to retrieve the data from in this case Exact Online and then bulk load it into the database.
However, the data must fit within both the column-name length limit and the column-count limit.
That is the main reason the column names look so weird; they are squeezed into a limited number of characters independent of the original column names. The column names of the generated views are also shortened, by replacing the middle piece with a unique MD5 hash.
As for the number of columns, Data Hub just rigorously checks that your source doesn't have more than the limit of 1,000 columns. The Exact Online XML APIs have no documentation describing which columns can actually be filled with a value; there is just an XSD that describes all theoretical possibilities, leading to millions of columns.
Most Exact Online XML-based tables have been tuned to exclude column name paths that never carry values, but they still often don't fit within 1,000 columns.
The possible solutions are:
Use the Exact Online REST API variant, which is often present and sometimes also similar in functionality and performance (not always; the XML API is old, but in general better designed for usability). So check whether there is an ExactOnlineREST..SubscriptionLines (see the sketch after this list).
Exhaustively describe which columns to replicate.
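For the first option, a minimal check could be to re-run the original count against the REST variant; whether ExactOnlineREST..SubscriptionLines actually exists in your data container is exactly what this verifies:

select /*+ ods(true, interval '20 hours') */ count(*)
from   ExactOnlineREST..SubscriptionLines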
Describe Columns to Replicate
The last solution is a little complex. It also rules out advanced strategies like trickle loading (with webhooks) or smart sampling; it is just a plain copy, with or without versioning.
As a sample, I've run a query on 200 Exact companies with subscriptions while connected to a Data Replicator environment.
Note that the /*+ ods(true) */ hint is not present but is assumed implicitly; replication is the default when you are also connected to Data Replicator.
By adding /*+ ods(false) */, you effectively tell the SQL engine to not send the data for replication into a database to the Data Replicator provider.
When I run it, there is another error, itgenugs026 (requested number of columns exceeds the maximum number supported for display in the results grid).
This is actually a rendering error; the grid used in the Query Tool restricts itself to 1,000 columns, since larger numbers of columns cause very slow UI response times.
By clicking the 'Hide empty columns' button, or by using Invantive Data Hub as the user interface, you can get the actual results.
Note the tooltip: the headings display somewhat natural labels, but the actual column name is shown in the tooltip.
Write down the column names that you need and fill an in-memory table with only the columns you need, such as:
create or replace table my_subscriptions#inmemorystorage
as
select /*+ ods(false) */
subscription_number_attr
, subscription_description
from exactonlinexml..subscriptionlines
Now replicate this table the normal way:
select /*+ ods(true, interval '1 second') */
count(*) some_unneeded_data_to_force_replication
from my_subscriptions#inmemorystorage
Note that the ODS hint must be present. In-memory tables are never replicated by default.
For refresh you can use alter persistent cache [force] refresh, but the in-memory table must have been filled in advance.
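Putting the two pieces together, a minimal refresh cycle (run in sequence, using the same illustrative in-memory table as above) could look like this:

create or replace table my_subscriptions#inmemorystorage
as
select /*+ ods(false) */
       subscription_number_attr
,      subscription_description
from   exactonlinexml..subscriptionlines

alter persistent cache force refresh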
The resulting entry shows up in the replication repository. The facts table (see dcs_.... for the Data Vault with time travel) is created automatically, and the default named view is imy_my_subscriptions_r ('imy' is the abbreviation of the 'inmemorystorage' driver).
I'm building a database in phpMyAdmin and I'm asking myself whether something is possible and how I could implement it.
I'm building lists through a website and then saving them in my database, but each list is only composed of items I already have stored in another table of my database.
I thought that a column with a SET datatype holding all the selected items would save space and improve clarity, instead of creating x rows linked to the created list by an ID column.
So the question is: can I create this kind of SET for a column, one which will update by itself when I add items to the other table? If yes, can I do it through the phpMyAdmin interface, or do I have to work on the MySQL server itself?
Finally, it won't be possible to use the SET datatype in my application because it can only store up to 64 items and I'll be manipulating around a thousand.
I'm still interested if any of you have an idea of how to do it, because a table with x rows of (ID, wordID#) (see my situation, explained a bit higher in this post, in the answers part) doesn't seem like a very optimized or lightweight option.
Have a nice day :)
It is possible to simulate a SET in a BLOB (64 KB, so roughly 512K bits) or MEDIUMBLOB (16 MB, roughly 128M bits), but it takes a bit of code -- find the byte, modify it using & or |, and stuff it back in.
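A rough sketch of that byte-twiddling in MySQL, with made-up table and column names (word_list, member_bits) and the bitmap pre-sized to 128 bytes, i.e. 1024 possible items:

-- one row per list; bit N of member_bits marks whether item N is in the list
CREATE TABLE word_list (
    id          INT PRIMARY KEY,
    member_bits BLOB
);
INSERT INTO word_list (id, member_bits) VALUES (1, REPEAT(CHAR(0), 128));

-- set bit 700 (add item 700 to list 1): find the byte, OR the bit in, stuff it back
UPDATE word_list
SET member_bits = CONCAT(
        LEFT(member_bits, FLOOR(700 / 8)),
        CHAR(ASCII(SUBSTRING(member_bits, FLOOR(700 / 8) + 1, 1)) | (1 << (700 % 8))),
        SUBSTRING(member_bits, FLOOR(700 / 8) + 2))
WHERE id = 1;

-- test bit 700
SELECT (ASCII(SUBSTRING(member_bits, FLOOR(700 / 8) + 1, 1)) >> (700 % 8)) & 1 AS is_member
FROM word_list
WHERE id = 1;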
Before MySQL 8.0, bitwise operations (e.g. ANDing two SETs) were limited to 64 bits. With 8.0, BLOBs can be operated on that way.
If your SETs tend to be sparse, then a list of bit numbers (in a comma-separated list or in a table) may be more compact. However, "dictionary" implies to me that your SETs are likely to be somewhat dense.
If you are doing some other types of operations, clue us in.
I would love to hear some opinions or thoughts on a mysql database design.
Basically, I have a Tomcat server which receives different types of data from about 1000 systems out in the field. Each of these systems is unique and will be reporting unique data.
The data sent can be categorized as frequent and infrequent data. The infrequent data is only sent about once a day and doesn't change much - it is basically just configuration-based data.
Frequent data is sent every 2-3 minutes while the system is turned on, and represents the current state of the system.
This data needs to be stored in the database for each system and be accessible at any given time from a PHP page. Essentially, for any system in the field, a PHP page needs to be able to access all the data on that client system and display it. In other words, the database needs to show the state of the system.
The information itself is all text-based, and there is a lot of it. The config data (that doesn't change much) is key-value pairs and there are currently about 100 of them.
My idea for the design was to have 100+ columns and 1 row for each system to hold the config data. But I am worried about having that many columns, mainly because it isn't very future-proof if I need to add columns later. I am also worried about insert speed if I do it that way. This might blow out to a 2000-row x 200-column table that gets accessed about 100 times a second, so I need to cater for this in my initial design.
I am also wondering if there are any design philosophies out there that cater for frequently changing and seldom-changing data based on the storage engine. This would make sense, as I want to keep INSERT/UPDATE time low, and I don't care too much about the SELECT time from PHP.
I would also love to know how to split up the data. I.e. if frequently changing data can be categorised in a few different ways, should I have a bunch of tables representing the data and join them on selects? I am worried about this because I will probably have to make a report to show common properties between all systems (i.e. show all systems with a certain condition).
I hope I have provided enough information here for someone to point me in the right direction; any help on the matter would be great. Or if someone has done something similar and can offer advice, I would be very appreciative. Thanks heaps :)
~ Dan
I've posted some questions in a comment. It's hard to give you advice about your rapidly changing data without knowing more about what you're trying to do.
For your configuration data, don't use a 100-column table. Wide tables are notoriously hard to handle in production. Instead, use a four-column table containing these columns:
SYSTEM_ID VARCHAR System identifier
POSTTIME DATETIME The time the information was posted
NAME VARCHAR The name of the parameter
VALUE VARCHAR The value of the parameter
The first three of these columns are your composite primary key.
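A minimal sketch of that table in MySQL - the table name system_config is made up, the columns are the ones listed above:

CREATE TABLE system_config (
    system_id VARCHAR(32)  NOT NULL,   -- SYSTEM_ID
    posttime  DATETIME     NOT NULL,   -- POSTTIME
    name      VARCHAR(64)  NOT NULL,   -- NAME
    value     VARCHAR(255),            -- VALUE
    PRIMARY KEY (system_id, posttime, name)
);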
This design has the advantage that it grows (or shrinks) as you add to (or subtract from) your configuration parameter set. It also allows for the storing of historical data. That means new data points can be INSERTed rather than UPDATEd, which is faster. You can run a daily or weekly job to delete history you're no longer interested in keeping.
(Edit: if you really don't need history, get rid of the POSTTIME column and use MySQL's nice INSERT ... ON DUPLICATE KEY UPDATE extension when you post data. See http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html)
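With POSTTIME dropped and the primary key reduced to (system_id, name), that upsert could look roughly like this (values are illustrative):

INSERT INTO system_config (system_id, name, value)
VALUES ('SYS-0001', 'firmware_version', '2.4.1')
ON DUPLICATE KEY UPDATE value = VALUES(value);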
If your rapidly changing data is similar in form (name/value pairs) to your configuration data, you can use a similar schema to store it.
You may want to create a "current data" table using the MEMORY storage engine for this stuff. MEMORY tables are very fast to read and write because the data is all in RAM in your MySQL server. The downside is that a MySQL crash and restart will give you an empty table, with the previous contents lost. (MySQL servers crash very infrequently, but when they do they lose MEMORY table contents.)
You can run an occasional job (every few minutes or hours) to copy the contents of your MEMORY table to an on-disk table if you need to save history.
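A minimal sketch of that pairing, with made-up table and column names (current_state for the MEMORY table, state_history for the on-disk copy):

-- latest readings held in RAM; contents are lost on a server restart
CREATE TABLE current_state (
    system_id VARCHAR(32)  NOT NULL,
    name      VARCHAR(64)  NOT NULL,
    value     VARCHAR(255),
    posttime  DATETIME,
    PRIMARY KEY (system_id, name)
) ENGINE = MEMORY;

-- upsert the latest reading for one parameter of one system
INSERT INTO current_state (system_id, name, value, posttime)
VALUES ('SYS-0001', 'engine_temp', '88.5', NOW())
ON DUPLICATE KEY UPDATE value = VALUES(value), posttime = VALUES(posttime);

-- periodic job: copy a snapshot into an on-disk history table
INSERT INTO state_history (system_id, name, value, posttime)
SELECT system_id, name, value, posttime FROM current_state;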
(Edit: You might consider adding memcached http://memcached.org/ to your web application system in the future to handle a high read rate, rather than constructing a database design for version 1 that handles a high read rate. That way you can see which parts of your overall app design have trouble scaling. I wish somebody had convinced me to do this in the past, rather than overdesigning for early versions. )
I created an SSIS package so I can import data from a legacy FoxPro database at scheduled intervals. A copy of the FoxPro database is installed for several customers. Overall, the package is working very well and accomplishing all that I need.
However, I have one annoying situation where at least one customer (maybe more) has a modified FP database, where they increased the length of one column in one table. When I run the package on such a customer, it fails because of truncation.
I thought I could just give myself some wiggle room and change the length from 3 to 10. That way the mutants with a length of 10 would be accommodated, as well as everyone else using 3. However, SSIS complains when the column lengths don't match, period.
I suppose I have a few options:
On the task, set 'ValidateExternalMetadata' to false. However, I'm not sure that is the most responsible option... or is it?
Get our implementation team to change the length to 10 for all customers. This could be a problem, but at least it would be their problem.
Create a copy of the task that works for solutions with the different column length. Implementation will likely use the wrong package at some point, and everyone will ask me why I didn't just give them a single package that could handle all scenarios, and blame this on me.
Use some other approach you might be able to fill me in on.
If you are using the Visual FoxPro OLE DB provider and you are concerned about the column widths, you can explicitly force them by using PADR() in your call. I don't know how many tables / queries this impacts, but it would guarantee you get your expected character column lengths. Numeric, decimal, date/time, and logical (boolean) columns should not be an issue... Anyhow, you could use something like this as your select to get the data:
select
t1.Fld1,
t1.Fld2,
padr( t1.CharFld3, 20 ) CharFld3,
padr( t1.CharFld4, 5 ) CharFld4,
t1.OtherFld5,
padr( t1.CharFld6, 35 ) CharFld6
from
YourTable t1
where
SomeCondition
This will force the character-based (sample) fields "CharFld3", "CharFld4", and "CharFld6" to fixed widths of 20, 5 and 35 respectively, regardless of the underlying structure length. Now, if someone makes the structure LONGER than what you have, the value will be truncated down to the proper length but won't crash. Additionally, if they have a shorter column length, it will be padded out to the full size you specify via the PADR() function (pad right).
I'm weak on the FoxPro side, but...
You could create a temporary table that meets the SSIS expectations. Create a task that would use FoxPro instructions to copy the data from the problem table to the temporary table. Alter your data flow to work with the temp table.
You can create the preliminary steps (create temp table and transfer to temp table) as SSIS tasks so that flow control is managed by your SSIS package.
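A rough sketch of that idea in Visual FoxPro, borrowing the illustrative names and the 10-character width from earlier in the thread:

* build a temp table whose character columns have the widths SSIS expects
SELECT Fld1, Fld2, PADR(CharFld3, 10) AS CharFld3 ;
    FROM YourTable ;
    INTO TABLE tmp_yourtable

The SSIS data flow then reads tmp_yourtable instead of the original table, so the external metadata always matches regardless of how a customer has widened the source column.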