HTML and Image Retrieval [closed] - html

I am working with HTML and am trying to find the most efficient way to pull in images that will be used for banners, backgrounds, etc.
Options
1) Store them on my server and access them by server path.
2) Store them on a third-party site such as ImageShack or Photobucket and access them via URL.
3) Store them in a MySQL database and access them via path. (Not planning on having a ton of images, but just throwing this out there.)
I am looking to be efficient in retrieving all the images that will be displayed on a page, but I would also like to limit the amount of resources my server is responsible for. Between these three options, is there a choice that is overwhelmingly obvious and I am just not seeing it?
Answers such as the ones below would be perfect. (This is how I am weighing my options.)
Store on server - Relies heavily on my personal server; downloads etc. will hit my server; high load/high responsibility.
Store on third-party site - Images are off my server, saving me space and some of the load (but is it worth the hassle?).
DB link - Quickest, best for tons of images, relies heavily on my personal server.
If this question is too opinion-based then I will assume all three approaches are equally effective and will close the question.

Store the images on a CDN and store the URLs of the images in a database.

The primary advantage here, which the other options lack, is caching. If you use a database, it has to be queried and a server script (an .ashx handler in the .NET framework I often use) has to return the resource. With ImageShack and the like I'm not sure, but I think the images retrieved are not cached.
The advantage of third-party hosting is that you don't give up your own bandwidth and storage space.
The database option has no advantages I can think of, other than if you need to version-control your images or something similar.

If you're working solely in HTML then the database option isn't possible, as you would need a server-side language to connect the DB to the page. If you have some PHP, ASP, Ruby, etc. knowledge then you can take that route.
I think the answer is dependent on what the site/application is.
If (like you said) you're using the images for banners, backgrounds, and things like that, then it's probably easiest to store them on your server and link to them on the page with <img src="/Source" alt="Image"/> (or set the backgrounds in CSS).
Make sure you are caching images so that they load more quickly for users after the first view.
Most servers are pretty fast, so I wouldn't worry too much about speed, unless the images you're using are huge (which anyone would tell you isn't recommended for a website anyway).
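As a rough illustration of the caching advice above, here is a minimal, stdlib-only Python sketch that serves a local images/ folder with a long-lived Cache-Control header; the directory name, port, and max-age are arbitrary choices, not anything taken from the question.

```python
# Serve a local images/ folder with a long-lived Cache-Control header so
# banners and backgrounds are cached by the browser after the first view.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CachingImageHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # allow browsers and proxies to reuse the file for one week
        self.send_header("Cache-Control", "public, max-age=604800")
        super().end_headers()

if __name__ == "__main__":
    handler = partial(CachingImageHandler, directory="images")
    HTTPServer(("", 8000), handler).serve_forever()
```

In practice your web server (Apache, nginx, etc.) can set the same headers for static files; the point is simply that a far-future cache header keeps repeat visitors off your bandwidth.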

Related

Using MySQL in an iOS App [closed]

My friend already has his own working website (selling some stuff). We have the idea of creating an iOS app for the site to attract more people (and, for me, to gain some badly needed experience).
The UI is going to be simple, and it won't pose as many problems as using the website's data. We need the app to hold some data locally, so that people who do not have internet access can still use the app.
But, of course, we want the information in the app to be up to date, so I need to use the MySQL data somehow (meaning that if the person has internet access, the app can use it and download some data; if not, the app must still contain some data to show). To be honest, I want the app to be really good, so my question is: which combination is better to use?
Use Core Data and create a data model (it is huge and difficult to reproduce, with a lot of classes to create). I can do it, but how would I update the data then? I have no idea.
Create an SQLite database, then use something like PHP code to fetch the data, encode it as JSON, and parse it in the app.
Maybe I should connect to MySQL directly from the app and use its data, since it's impossible to keep all the same data locally?
Or just parse it, using JSON or XML?
Please help me out; I need my app to be cool and robust, but I don't know how to do it. Maybe you can tell me a better way to solve this problem?
Generally you'll have to build a similar database inside your application using SQLite and import data from MySQL through some kind of API bridge. A simple way to do this data interchange is via JSON that encodes the record's attributes. XML is also a possible transport mechanism but tends to have more overhead and ends up being trickier to use. What you'll be sending back and forth is generally sets of key-value pairs, not entire documents.
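As a hedged sketch of that API-bridge idea (not the poster's actual stack), here is a small Python example: the server side flattens rows into JSON key-value pairs, and the client side applies the payload to a local SQLite cache so the app still works offline. The products table and its columns are invented purely for illustration.

```python
# Server exports rows as JSON; client upserts them into its local SQLite cache.
import json
import sqlite3

def export_rows(server_rows):
    # on the real server these rows would come from MySQL
    return json.dumps({"products": server_rows})

def apply_payload(conn, payload):
    # upsert each record into the local cache
    data = json.loads(payload)
    with conn:
        for row in data["products"]:
            conn.execute(
                "INSERT OR REPLACE INTO products (id, name, price) VALUES (?, ?, ?)",
                (row["id"], row["name"], row["price"]),
            )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
payload = export_rows([{"id": 1, "name": "Lamp", "price": 19.5}])
apply_payload(conn, payload)
print(conn.execute("SELECT * FROM products").fetchall())
```

On iOS the "client" half would of course be Core Data or SQLite via the native APIs; the sketch only shows the shape of the data interchange.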
Stick to Core Data unless you have an exceptionally good reason to use something else. Finding it irritating or different is not a good reason. It can be a bit tricky to get the hang of at first, but in practice it tends to be mostly unobtrusive if used correctly.
Unless you're writing something that's expressly a MySQL client, never connect directly to MySQL in an iOS application. Period. Don't even think about doing this. Not only is it impossible to secure effectively, but iOS networking is expected to be extremely unreliable, slow, and often unavailable entirely. Your application must be able to make use of limited bandwidth, deal with very high latency, and break up operations into small transactions that are likely to succeed instead of one long-running operation that is bound to fail.
How do you sync data between your database and your client? That depends on what web stack you're going to be using. You have a lot of options here, but at the very least you should prototype in something like Ruby on Rails, Django, or NodeJS. PHP is viable, but without a database framework will quickly become very messy.

Decentralized backup using torrent protocol [closed]

I'm playing with the idea of creating a client that would use the torrent protocol (as used today in download clients such as uTorrent or Vuze). The client software would:
Select the files you would like to back up
Create torrent-like descriptor files for each file
Offer optional encryption of your files based on a key phrase
Let you select the redundancy you would like to trade with other clients
(Redundancy would be based on a give-and-take principle: if you want to back up 100 MB five times, you would have to offer an extra 500 MB of your own storage space. The backup would not be distributed among only 5 clients; it would use as many clients as possible that offer storage in exchange, chosen by the physical distance specified in settings.)
Optionally:
I'm thinking of including edge file sharing: you could have non-encrypted files shared in your backup storage and prefer clients that have port 80 open for public HTTP sharing. But this gets tricky, since I have a hard time coming up with a simple scheme by which a visitor would pick the closest backup client.
Include a file manager that would allow FTP-with-a-GUI-style file transfers between two systems using the torrent protocol.
I'm thinking about creating this as a service/API project (sort of like http://www.elasticsearch.org ) that could be integrated into any container, such as Tomcat with Spring, or just plain Swing.
This would be a P2P open-source project. Since I'm not completely confident in my understanding of the torrent protocol, the question is:
Is the above feasible with the current state of torrent technology (and where should I look to recruit Java developers for this project)?
If this is the wrong spot to post this, please move it to a more appropriate site.
You are considering the wrong technology for the job. What you want is an erasure code using Vandermonde matrices. This lets you get the same level of protection against lost data without needing to store nearly as many copies. There's an open-source implementation by Luigi Rizzo that works perfectly.
What this code allows you to do is take an 8MB chunk of data and cut it into any number of 1MB chunks such that any eight of them can reconstruct the original data. You get the same level of protection as tripling the size of the data stored without even doubling it.
You can tune the parameters any way you want. With Luigi Rizzo's implementation, there's a limit of 256 chunks. But you can control the chunk size and the number of chunks required to reconstruct the data.
You do not need to generate or store all the possible chunks. If you cut an 80MB chunk of data into 8MB chunks such that any ten can recover the original data, you can construct up to 256 such chunks. You will likely only want 20 or so.
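To make the "any k of n shares reconstruct the data" property concrete, here is a minimal Python sketch of Vandermonde-matrix erasure coding over the prime field GF(257). Real implementations such as Luigi Rizzo's work over GF(2^8) and are far more efficient; treat this only as an illustration of the principle, with all parameters made up.

```python
# Vandermonde-based erasure coding over GF(257): encode data into n shares
# such that any k of them reconstruct the original bytes.
P = 257  # prime modulus, so every nonzero element has an inverse

def vandermonde(n, k):
    # row i is [1, x, x^2, ..., x^(k-1)] with x = i + 1 (all nodes distinct)
    return [[pow(i + 1, j, P) for j in range(k)] for i in range(n)]

def encode(data, n, k):
    # pad to a multiple of k, split into columns of k bytes, multiply by V
    data = list(data) + [0] * (-len(data) % k)
    cols = [data[i:i + k] for i in range(0, len(data), k)]
    V = vandermonde(n, k)
    return [[sum(V[i][j] * col[j] for j in range(k)) % P for col in cols]
            for i in range(n)]

def solve(A, b):
    # Gauss-Jordan elimination mod P for a k x k system A x = b
    k = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(k):
        piv = next(r for r in range(col, k) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], P - 2, P)
        M[col] = [x * inv % P for x in M[col]]
        for r in range(k):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][c] - f * M[col][c]) % P for c in range(k + 1)]
    return [M[r][k] for r in range(k)]

def decode(shares, n, k, length):
    # shares: dict {share_index: symbols}; any k of them suffice
    idx = sorted(shares)[:k]
    V = vandermonde(n, k)
    A = [V[i] for i in idx]
    out = []
    for c in range(len(shares[idx[0]])):
        out.extend(solve(A, [shares[i][c] for i in idx]))
    return bytes(out[:length])

msg = b"backup me five ways, any three recover"
n, k = 5, 3
shares = encode(msg, n, k)
kept = {0: shares[0], 2: shares[2], 4: shares[4]}  # shares 1 and 3 are lost
assert decode(kept, n, k, len(msg)) == msg
```

The same structure scales to the 80 MB / 8 MB example above: pick k equal to the number of shares needed for recovery, generate only as many of the n possible shares as you actually want to distribute.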
You might have great difficulty enforcing the reciprocal-storage feature, which I believe is critical to large-scale adoption (finally, a good use for those three-terabyte drives you get in cereal boxes!). You might wish to study the mechanisms of Bitcoin to see if there are any tools you can borrow or adapt for your own needs for distributed, non-repudiable proof of storage.

To captcha or not to captcha [closed]

I would like to know what you think about adding CAPTCHA mechanisms to registration forms.
I notice that many sites don't use CAPTCHA mechanisms in their registration forms (examples: http://djdesignerlab.com/2010/04/14/25-cool-sign-up-and-login-form-designs/).
I would like to open this topic to see what people think.
I always thought we should make our forms as secure as possible, but on the other hand there are many users out there who don't have much patience for filling in a CAPTCHA on a registration page.
- Do you think adding this mechanism to a registration page can drastically reduce the number of registered users in the long term?
- How dangerous can it be to leave this mechanism off a registration page?
I set up CAPTCHA on my system for one main reason: to know that the user who registers, or is registered, is actually human. Don't forget that CAPTCHA is not only used for registration and/or login security checks. SO, for example, shows a CAPTCHA if it senses too-frequent edits to the same question or answers in a very short time span. The CAPTCHA, in this case, is a check that the editor is human rather than a robot.
In essence, you have to make a good decision about where you would like to use CAPTCHA (if you're planning to use it) and how it will serve your purpose.
Hope this helps.
I can't stand CAPTCHAs at all. I understand the need to check against bots, but why should the legitimate end user pay the price of reading obfuscated words? That's my personal opinion.
I have seen some sites ask basic questions instead, such as "The colour of the sky is:" with a textbox and a clue to the word length. It's a bit more ad hoc, but to be honest I have had no problem getting the right answer with the ones I have seen.
I refuse to implement it; it's a big 'F U' to users. The only exceptions are those where only numbers are required. These are much better, as there is no casing involved; half the time with letter-based CAPTCHAs you can hardly tell which letters are uppercase or lowercase.
We've come a very long way in web accessibility, and CAPTCHAs are sending us in the wrong direction. reCAPTCHA does serve a purpose, I agree, but it's still a CAPTCHA.
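A question-based check like the "colour of the sky" example above is only a few lines of code. Here is a small, hypothetical Python sketch; the questions and the matching rule are made up, and the expected answer should be kept server-side (e.g. in the session), never embedded in the page.

```python
# Pick a random human-friendly question and check the submitted answer.
import secrets

QUESTIONS = {
    "The colour of the sky is (4 letters):": "blue",
    "How many legs does a cat have?": "4",
}

def pick_question():
    text = secrets.choice(list(QUESTIONS))
    return text, QUESTIONS[text]

def is_human(expected, submitted):
    # case-insensitive match with surrounding whitespace ignored
    return submitted.strip().lower() == expected.lower()

question, expected = pick_question()
print(question)                                      # rendered next to the textbox
print(is_human(expected, f" {expected.upper()} "))   # a correct reply passes
```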
Danger is one thing, but the flood of spam you will get is another. I have seen situations where a commenting system was rendered useless because of the spam being added.
There are definitely issues with CAPTCHAs beyond simple inconvenience; a lot of them have accessibility problems. I prefer reCAPTCHA, which does a really nice job of handling accessibility while performing a useful service at the same time.
There are other options out there; Akismet is a verification tool that does not require user input. I would recommend looking at it if you are trying to avoid the manual verification process.
I think it's a case-by-case situation. If your site is public and popular, and a clever programmer could gain financial value from bots posting content to your site, then a CAPTCHA is the way to go.
If you find that your site does not get much traffic, or it is on a private network, then there is no point in employing a CAPTCHA.
I would suggest going without it at first, then pulling it out of your tool belt if spam becomes a problem.
A few of my fellow small-publisher friends and I created a private database to pool IP addresses and netblocks of known spammers. Some of us have removed our reCAPTCHA integration in favor of a backend IP check. Some backlink spammers are getting through, but it's slowing down as the database gets larger. We've opened up the API so others can give it a try: http://www.spamerator.com
CAPTCHA? Fine. Set it up. But please make it human-friendly, like this one:
The letters are clear, big, and readable. And if you don't want to use images, I have implemented a base64-encoded one in addition.

If you are using Apache ZooKeeper, what do you use it for? [closed]

Is there anyone out there using ZooKeeper for their sites? If so, what do you use it for? I just want to see real-world use cases.
I've just started researching the use of ZooKeeper for a number of cases in my company's infrastructure.
The one that seems to fit ZK best is where we have an array of 30+ dynamic-content servers that rely heavily on file-based caching (Memcached is too slow). Each of these servers will have an agent watching a specific ZK path; when a new node shows up, all servers join a barrier lock, and once all of them are present, they all update their configuration at exactly the same time. This way we can keep all 30 servers' configuration and run-states consistent.
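A rough sketch of such an agent, using the kazoo Python client and assuming its ChildrenWatch and DoubleBarrier recipes fit your version; the ZooKeeper host, paths, server count, and helper function are placeholders, not anything from the original answer.

```python
# Each server runs this agent: watch a config path, and when a new release
# node appears, wait at a double barrier so all servers apply it together.
from kazoo.client import KazooClient
from kazoo.recipe.barrier import DoubleBarrier

NUM_SERVERS = 30

def apply_new_configuration(release):
    # hypothetical helper: re-read config files, clear file caches, etc.
    print("applying release", release)

zk = KazooClient(hosts="zk1:2181")
zk.start()
zk.ensure_path("/config/releases")

@zk.ChildrenWatch("/config/releases")
def on_release(children):
    if not children:
        return
    # every agent waits here until all NUM_SERVERS have seen the new node,
    # then they apply the configuration together; in a real agent this
    # blocking work should be handed off to a worker thread rather than
    # done inside the watch callback
    barrier = DoubleBarrier(zk, "/config/barrier", NUM_SERVERS)
    barrier.enter()
    apply_new_configuration(sorted(children)[-1])
    barrier.leave()
```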
Second use case: we receive 45-70 million page views a day in a typical bell-curve-like pattern. The caching strategy falls through from client, to CDN, to memcache, and then to file cache before a DB call is made. Even with a series of locks in place, it's pretty typical to get race conditions (I've nicknamed them stampedes) that can strain our backend. The hope is that ZK can provide a tool for building a consistent, unified locking service across multiple servers and maybe data centers.
You may be interested in the recently published scientific paper on ZooKeeper:
http://research.yahoo.com/node/3280
The paper also describes three use cases and comparable projects.
We do use ZK as a dependency of HBase and have implemented a scheduled work queue for a feed reader (millions of feeds) with it.
The ZooKeeper "PoweredBy" page has some detail that you might find interesting:
https://cwiki.apache.org/confluence/display/ZOOKEEPER/PoweredBy
HBase uses ZK and is open source (Apache) which would allow you to look at actual code.
http://hbase.apache.org/

Could CouchDB benefit significantly from the use of BERT instead of JSON? [closed]

I very much appreciate CouchDB's attempt to use universal web formats in everything it does: RESTful HTTP methods in every interaction, JSON objects, and JavaScript code to customize the database and documents.
CouchDB seems to scale pretty well, but the per-request cost usually scares 'relational' people away.
Many small business applications have to deal with only one machine, and that's all. In that case the scalability talk doesn't mean much; we need more performance per request, or people will not use it.
BERT (Binary ERlang Term, http://bert-rpc.org/ ) has proven to be a faster and lighter format than JSON, and it is native to Erlang, the language in which CouchDB is written. Could we benefit from that by using BERT documents instead of JSON ones?
I don't mean just for retrieval in views, but for everything CouchDB does, including syncing, and, as a consequence, using Erlang functions instead of JavaScript ones.
This would modify some of CouchDB's original principles, because today it is very web-oriented. Considering that I imagine few people make their database API public, and that its data is usually accessed by users through an application, it would be a good deal to be able to configure CouchDB to work faster. HTTP+JSON calls could still be handled by CouchDB, with an extra parsing cost in those cases.
You can have a look at hovercraft. It provides a native Erlang interface to CouchDB. Combining this with Erlang views, which CouchDB already supports, you can have a sort-of all-Erlang CouchDB (some external libraries, such as ICU, will still need to be installed).
CouchDB wants maximum data portability. You can't parse BERT in the browser, which means it's not portable to the largest platform in the world, so that's kind of a non-starter for base CouchDB. There might be a place for BERT in hovercraft, though, as mentioned above.
I think it would first be good to measure how much of the overhead is due to JSON processing: JSON handling can be very efficient. For example, these results suggest that JSON is the fastest schema-less data format on the Java platform (Protocol Buffers require a strict schema, ditto for Avro; Kryo is a Java serializer), and I would assume the same could be achieved on other platforms too (with Erlang; in browsers via native support).
So, "if it ain't broke, don't fix it." JSON is very fast when properly implemented, and if space usage is a concern, it compresses well, like any textual format.
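In that spirit, here is a quick and admittedly crude way to measure JSON encode/decode cost and compression on your own documents (Python stdlib only; the sample document is made up, and the numbers will vary by machine and payload).

```python
# Time JSON encode/decode of a representative document and check how well
# the serialized form compresses.
import gzip
import json
import timeit

doc = {"_id": "doc-1", "tags": ["a", "b", "c"], "values": list(range(1000))}

encode_s = timeit.timeit(lambda: json.dumps(doc), number=10_000)
blob = json.dumps(doc).encode()
decode_s = timeit.timeit(lambda: json.loads(blob), number=10_000)

print(f"encode: {encode_s:.3f}s / 10k calls, decode: {decode_s:.3f}s / 10k calls")
print(f"raw: {len(blob)} bytes, gzipped: {len(gzip.compress(blob))} bytes")
```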