My friend already has a working website (he sells some stuff there). We have an idea to create an iOS app for the site to attract more people (and, for me, to gain some badly needed experience).
The UI is going to be simple; the real problem is using the website's data. We need the app to store some data locally, so that people who don't have internet access can still use the app.
But, of course, we want the information in the app to be up to date, so I need to use the site's MySQL data somehow: if the user has an internet connection, the app should download fresh data; if not, it must fall back on the data it already has. I want the app to be really good, so my question is: which combination of technologies should I use?
1) Use Core Data and create a data model (the model is huge and difficult to reproduce, with a lot of classes to create). I can do that, but then how do I update the data? I have no idea.
2) Create an SQLite database, then use something like PHP on the server to fetch the data, encode it as JSON, and parse that in the app.
3) Connect to MySQL directly from the app and use its data, since it may be impossible to keep the same data locally.
4) Or just parse everything from the server, using JSON or XML?
Please help me, guys; I want the app to be solid and robust, but I don't know how to get there. Can you suggest the best way to solve this kind of problem?
Generally you'll have to build a similar database inside your application using SQLite and import data from MySQL through some kind of API bridge. A simple way to do this data interchange is via JSON that encodes the record's attributes. XML is also a possible transport mechanism but tends to have more overhead and ends up being trickier to use. What you'll be sending back and forth is generally sets of key-value pairs, not entire documents.
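For example, the client side of such a bridge might look roughly like the sketch below. It is written in plain Java (not Objective-C) purely for illustration, using the org.json and sqlite-jdbc libraries; the endpoint, table, and field names are invented for this sketch.

```java
// Illustrative sync client: pull changed records as JSON from a web API
// and upsert them into a local SQLite cache, so the app still has data
// to show when the network is unavailable.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.json.JSONArray;
import org.json.JSONObject;

public class ProductSync {
    public static void main(String[] args) throws Exception {
        // 1. Fetch the changed records; each record is a set of key-value pairs.
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/api/products?since=2014-01-01")).build();
        String body = http.send(request, HttpResponse.BodyHandlers.ofString()).body();

        // 2. Upsert each record into the local SQLite cache.
        try (Connection db = DriverManager.getConnection("jdbc:sqlite:cache.db")) {
            db.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT, price REAL)");
            JSONArray records = new JSONArray(body);
            String sql = "INSERT OR REPLACE INTO products (id, name, price) VALUES (?, ?, ?)";
            try (PreparedStatement stmt = db.prepareStatement(sql)) {
                for (int i = 0; i < records.length(); i++) {
                    JSONObject rec = records.getJSONObject(i);
                    stmt.setInt(1, rec.getInt("id"));
                    stmt.setString(2, rec.getString("name"));
                    stmt.setDouble(3, rec.getDouble("price"));
                    stmt.executeUpdate();
                }
            }
        }
    }
}
```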
Stick to Core Data unless you have an exceptionally good reason to use something else. Finding it irritating or different is not a good reason. It can be a bit tricky to get the hang of at first, but in practice it tends to be mostly unobtrusive if used correctly.
Unless you're writing something that's expressly a MySQL client, never connect directly to MySQL in an iOS application. Period. Don't even think about doing this. Not only is it impossible to secure effectively, but iOS networking is expected to be extremely unreliable, slow, and often unavailable entirely. Your application must be able to make use of limited bandwidth, deal with very high latency, and break up operations into small transactions that are likely to succeed instead of one long-running operation that is bound to fail.
How do you sync data between your database and your client? That depends on what web stack you're going to be using. You have a lot of options here, but at the very least you should prototype in something like Ruby on Rails, Django, or NodeJS. PHP is viable, but without a database framework it will quickly become very messy.
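Whichever stack you choose, the bridge itself is conceptually small: an endpoint that queries the database and returns rows as JSON. As a deliberately stack-agnostic sketch, here is a minimal endpoint using only the JDK's built-in HTTP server; the path and the hard-coded payload are invented for illustration, and a real implementation would query MySQL and serialize the rows.

```java
// Minimal "API bridge" endpoint sketch using only the JDK's built-in
// HTTP server. The hard-coded payload keeps the sketch self-contained.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class BridgeServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/products", exchange -> {
            byte[] json = "[{\"id\":1,\"name\":\"Widget\",\"price\":9.99}]"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, json.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(json);
            }
        });
        server.start(); // serves http://localhost:8080/api/products
    }
}
```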
I am working with HTML and am trying to find the most efficient way to pull in images that will be used for banners, backgrounds, etc.
Options
1) Store on server and access them by the server path.
2) Store on a third party site such as imageshack or photobucket and access via URL
3) Store in a MySQL database and access via path. (Not planning on having a ton of images, but just throwing this out there.)
I want to be efficient in retrieving all the images displayed on a page, but I would also like to limit the load my server is responsible for. Between these three options, is there an obviously better choice that I am just not seeing?
Answers such as the ones below would be perfect. (This is how I am weighing my options.)
Store on server - rely heavily on personal server; downloads etc. will be hitting my server; high load/high responsibility
Store on third-party site - images off of my server, saves me space and some of the load (but is it worth the hassle?)
DB link - quickest, best for tons of images, but relies heavily on personal server
If this question is too opinion-based, then I will assume all three ways are equally effective and will close the question.
Store the images on a CDN and store the URLs of the images in a database.
The primary advantage here, which the other options lack, is caching. If you use a database, it needs to be queried and a server script (an .ashx handler in the .NET framework I often use) needs to return the resource. With ImageShack etc. I'm not sure, but I think the retrieved images are not cached.
The advantage of a third-party host is that you don't lose bandwidth and storage space.
For storing images in the database, there are no advantages I can think of, other than if you need to version-control your images or something.
If you're working solely in HTML, then the database option isn't possible, as you would need a server-side language to connect the database to the page. If you have some PHP, ASP, Ruby, etc. knowledge, then you can go that route.
I think the answer depends on what the site/application is.
If (like you said) you're using the images for banners, backgrounds, and things like that, then maybe it's easiest to store them on your server and link to them on the page like <img src="/Source" alt="Image"/> (or do the backgrounds in CSS).
Make sure you are caching images so that they'll load quicker for users after the first view.
Most servers are pretty fast so I wouldn't worry too much about speeds ... unless the images you're using are huge (which anyone would tell you isn't recommended for a website anyway)
I have to write a database in Access 2010, and I need to use VBA as well (I have never used it). I thought the time had come to learn a little about VBA and VB. I would also like to read through a VB tutorial, just to know a little bit about that too. But I found many versions of VB, for example 6.0, 2005, 2008, and 2010.
My question is: if I want to learn VBA in Access 2010, which VBA version should I study (a link would be good), and which version of VB?
VBA and VB are not the same, particularly VB in the context of the .NET framework. If you want to be able to program within Access, then you need VBA, not VB. Get a book which covers Access VBA - if you don't like Banjoe's suggestion, there are plenty with fewer pages, and tons of material accessible via Google.
I've always found the WROX books to be fairly comprehensive and full of useful, real-world examples. For example: Access-2007-Programmers-Reference
In the beginning try to stick with bound forms/reports as much as possible. You can do a lot without VBA and once you start custom coding things it tends to snowball.
If you're new to database design make sure you read up on how to properly normalize your data. Designing your database properly will save you tons of time in the long run. See: here for one example.
I would suggest you are asking the wrong question. Access is a point-and-click development tool, not a programming language. So, what you need to learn is how to use Access to create applications. That means creating user interface objects interactively and then extending them with code.
However, one thing to keep in mind is that A2010 has powerful new macros with branching, logic, and error handling. These are quite robust, because all the features of the new Access web databases (usable with SharePoint via Access Services, and runnable in a web browser) are built on top of these macros.
So, I would suggest that you invest time in learning how to create web objects in addition to learning how to sprinkle in some VBA code to extend the behavior of your Access UI objects (and the VBA code won't run in a Web database, BTW).
Is anyone out there using ZooKeeper for their sites? If you are, what do you use it for? I just want to see a real-world use case.
I've just started doing the research on using ZooKeeper for a number of cases in my company's infrastructure.
The one that seems to fit ZK best is where we have an array of 30+ dynamic content servers that rely heavily on file-based caching (memcached is too slow). Each of these servers will have an agent watching a specific ZK path; when a new node shows up, all servers join a barrier lock, and once all of them are present, they all update their configuration at the exact same time. This way we can keep all 30 servers' configurations/run-states consistent.
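That barrier can be built directly on the plain Apache ZooKeeper Java client. A minimal sketch of the enter step follows; the path and cluster size are invented, and error handling is omitted.

```java
// Sketch of the barrier described above: each agent announces itself
// under a barrier path and blocks until the whole cluster has joined.
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ConfigBarrier {
    private final ZooKeeper zk;
    private static final String BARRIER = "/config-barrier"; // hypothetical path
    private static final int CLUSTER_SIZE = 30;

    public ConfigBarrier(ZooKeeper zk) { this.zk = zk; }

    /** Announce this server, then block until the whole cluster is present. */
    public void enter(String serverId) throws Exception {
        // Ephemeral: if a server dies, its membership vanishes with its session.
        zk.create(BARRIER + "/" + serverId, new byte[0],
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        while (true) {
            CountDownLatch changed = new CountDownLatch(1);
            // Count members and leave a watch for the next membership change.
            List<String> members = zk.getChildren(BARRIER, event -> changed.countDown());
            if (members.size() >= CLUSTER_SIZE) {
                return; // everyone is present: safe to reload configuration now
            }
            changed.await();
        }
    }
}
```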
Second use case: we receive 45-70 million page views a day in a typical bell-curve-like pattern. The caching strategy falls through from client, to CDN, to memcache, and then to file cache before determining when to make a DB call. Even with a series of locks in place, it's pretty typical to get race conditions (I've nicknamed them stampedes) that can strain our backend. The hope is that ZK can provide a tool for developing a consistent and unified locking service across multiple servers and maybe data centers.
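Such a locking service would typically follow the standard ZooKeeper lock recipe built on ephemeral sequential nodes. A simplified sketch: it assumes the parent path already exists, lets every waiter watch the current holder (a real implementation watches its predecessor to avoid a herd effect), and omits error handling; the /locks path is invented.

```java
// Simplified sketch of the ZooKeeper lock recipe: whoever owns the
// lowest ephemeral sequential node under /locks/<key> holds the lock.
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class CacheLock {
    private final ZooKeeper zk;
    private String myNode;

    public CacheLock(ZooKeeper zk) { this.zk = zk; }

    /** Block until this process holds the lock for the given cache key. */
    public void acquire(String key) throws Exception {
        String dir = "/locks/" + key;
        myNode = zk.create(dir + "/lock-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        while (true) {
            List<String> waiters = zk.getChildren(dir, false);
            Collections.sort(waiters); // sequence numbers define the queue order
            String lowest = dir + "/" + waiters.get(0);
            if (myNode.equals(lowest)) {
                return; // we hold the lock: only we rebuild this cache entry
            }
            // Wait until the holder's ephemeral node disappears, then re-check.
            CountDownLatch released = new CountDownLatch(1);
            if (zk.exists(lowest, event -> released.countDown()) != null) {
                released.await();
            }
        }
    }

    public void release() throws Exception {
        zk.delete(myNode, -1); // -1 = any version
    }
}
```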
You may be interested in the recently published scientific paper on ZooKeeper:
http://research.yahoo.com/node/3280
The paper also describes three use cases and comparable projects.
We do use ZK as a dependency of HBase and have implemented a scheduled work queue for a feed reader (millions of feeds) with it.
The ZooKeeper "PoweredBy" page has some detail that you might find interesting:
https://cwiki.apache.org/confluence/display/ZOOKEEPER/PoweredBy
HBase uses ZK and is open source (Apache) which would allow you to look at actual code.
http://hbase.apache.org/
I very much appreciate CouchDB's attempt to use universal web formats in everything it does: RESTful HTTP methods in every interaction, JSON objects, JavaScript code to customize the database and documents.
CouchDB seems to scale pretty well, but the per-request cost is usually what scares 'relational' people away.
Many small business applications have to deal with only one machine, and that's all. In that case the scalability talk doesn't mean much; we need more performance per request, or people will not use it.
BERT (Binary ERlang Term, http://bert-rpc.org/) has proven to be a faster and lighter format than JSON, and it is native to Erlang, the language CouchDB is written in. Could we benefit from that by using BERT documents instead of JSON ones?
I don't mean just for retrieval in views, but for everything CouchDB does, including syncing, and, as a consequence, using Erlang functions instead of JavaScript ones.
This would modify some of CouchDB's original principles, because today it is very web-oriented. Since I imagine few people make their database API public, and the data is usually accessed by users through an application, being able to configure CouchDB to work faster would be a good deal. HTTP+JSON calls could still be handled by CouchDB, at an extra parsing cost in those cases.
You can have a look at hovercraft. It provides a native Erlang interface to CouchDB. Combining this with Erlang views, which CouchDB already supports, you can have a sort-of all-Erlang CouchDB (some external libraries, such as ICU, will still need to be installed).
CouchDB wants maximum data portability. You can't parse BERT in the browser which means it's not portable to the largest platform in the world so that's kind of a non-starter for base CouchDB. Although there might be a place for BERT in hovercraft as mentioned above.
I think it would first be good to measure how much of the overhead is due to JSON processing: JSON handling can be very efficient. For example, these results suggest that JSON is the fastest schema-less data format on the Java platform (Protocol Buffers require a strict schema, ditto for Avro; Kryo is a Java serializer), and I would assume the same could be achieved on other platforms too (with Erlang; for browsers via native support).
So, "if it ain't broke, don't fix it". JSON is very fast, when properly implemented; and if space usage is concern, it compresses well just like any textual formats.
Has using an acknowledged anti-pattern ever been proven to actually work in a certain specific case? Did you ever solve a problem or gain any kind of benefit in one of your projects by using an anti-pattern?
My understanding of the "anti-pattern" concept is that it encompasses solutions that have drawbacks that only reveal themselves over the long term. Indeed, the primary danger associated with a lot of them (like writing spaghetti code with loads of global variables and gotos every which way, or tossing exceptions into the black hole of an empty catch block) is that they're seductive because they provide an expedient solution to an immediate problem.
EDIT to add: Because of that, sometimes you do derive benefit from these anti-patterns. Sometimes your calculation that you're writing throwaway code that no one will touch again is dead wrong and you wind up with maintenance programmers slandering your heritage and sexual hygiene, but other times you're right and that crummy shell script that's held together with baling wire and spit does the job you intended it to do and is then blessedly forgotten, saving you the considerable time and effort of putting together something decent.
Anti-patterns are still so widespread simply because they solve a particular problem (while creating ten new ones). Also known as a workaround. But how does the saying go? Nothing lasts longer than a makeshift.
In fact I believe we'd all be jobless if things had been done right from the beginning.
The biggest problem that anti-patterns have solved in my experience is launching a new application.
When the dev team has scoped the new application thoroughly, the timeline to implement the correct solution is usually too much for management to bear. Therefore, you often code to meet the timeline rather than for the "correctness" of the solution, just to get to the launch date (while having others code the "correct" solution for the next rev), making it essentially throw-away code.
One software anti-pattern is softcoding, also defined at The Daily WTF. Softcoding happens when programmers move material that "should be" inside the code into external resources.
I'm working with software that some might say is suffering from softcoding. External files drive the software. Those external files are a micro-language: they must be compiled to XML before the software can use them. This micro-language has its own tools.
But softcoding is always in the mind of the beholder.
Having the material in a micro-language with its own parser has made my life easier. One data source can generate many different outputs: In addition to the version that the main program uses, I am able to extract information into HTML, .csv, and other formats that our customers want. Other programs can generate code in the micro-language, making automation easier.
In our case, softcoding has been a useful pattern, not an anti-pattern.
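For readers who haven't met the term, here is a minimal sketch of what softcoding looks like, with a business rule moved out of the code into an external file; the file name and keys ("rules.properties", "discount.threshold") are invented for this example.

```java
// Minimal softcoding sketch: a business rule lives in an external
// properties file instead of in the code.
import java.io.FileInputStream;
import java.util.Properties;

public class Pricing {
    public static void main(String[] args) throws Exception {
        Properties rules = new Properties();
        try (FileInputStream in = new FileInputStream("rules.properties")) {
            rules.load(in);
        }
        // The threshold "should be" a constant in the code; softcoding
        // moves it out so non-programmers (or other tools) can change it.
        double threshold = Double.parseDouble(
                rules.getProperty("discount.threshold", "100.0"));
        double order = 120.0;
        double price = order >= threshold ? order * 0.9 : order;
        System.out.println("price = " + price);
    }
}
```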
There is a reason for calling it a pattern rather than a law.
I would surmise that almost everyone has at least one example of a place in code where exactly the wrong thing was done, and it turned out better in the long term than the "right" thing would have.
And a far longer list of examples of anti-patterns causing trouble.
I have used magic pushbuttons a number of times, out of ignorance or laziness, and sometimes it actually worked out just fine, and it turned out that I did not need the extra abstraction of proper MVC.
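For anyone unfamiliar with the name, here is a minimal Swing sketch of a magic pushbutton: validation, business logic, and presentation all crammed directly into the button's event handler, with no model or controller in sight.

```java
// Minimal Swing "magic pushbutton": all the logic lives right in the
// event handler instead of behind a proper MVC separation.
import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JOptionPane;
import javax.swing.JTextField;

public class MagicPushbutton {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Magic Pushbutton");
        JTextField amount = new JTextField("100", 8);
        JButton calculate = new JButton("Calculate");
        calculate.addActionListener(e -> {
            // Everything happens right here in the handler.
            double total = Double.parseDouble(amount.getText()) * 1.2;
            JOptionPane.showMessageDialog(frame, "Total: " + total);
        });
        frame.getContentPane().add(amount, BorderLayout.NORTH);
        frame.getContentPane().add(calculate, BorderLayout.SOUTH);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}
```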
Duff's Device utilizes the Loop-Switch Sequence (AKA For-Case Paradigm) anti-pattern.
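For contrast, here is a minimal illustration of the Loop-Switch Sequence itself: a fixed series of steps needlessly written as a loop over a switch. (Duff's Device proper is C code that interleaves a do-while with a switch to unroll a copy loop; the sketch below shows only the for-case shape.)

```java
// The Loop-Switch Sequence (For-Case Paradigm) in its simplest form:
// a known, fixed sequence of steps dispatched through a loop counter.
public class ForCase {
    public static void main(String[] args) {
        for (int step = 0; step < 3; step++) {
            switch (step) {
                case 0: System.out.println("connect"); break;
                case 1: System.out.println("send request"); break;
                case 2: System.out.println("read response"); break;
            }
        }
        // Straight-line code would say the same thing more clearly:
        // connect, send request, read response.
    }
}
```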